Prosecution Insights
Last updated: April 19, 2026
Application No. 18/442,786

METHODS FOR SELF-ADAPTIVE TIME SERIES FORECASTING, AND RELATED SYSTEMS AND APPARATUS

Non-Final OA §103
Filed
Feb 15, 2024
Examiner
HOTALING, JOHN M
Art Unit
3992
Tech Center
3900
Assignee
Datarobot Inc.
OA Round
1 (Non-Final)
73%
Grant Probability
Favorable
1-2
OA Rounds
3y 3m
To Grant
81%
With Interview

Examiner Intelligence

Grants 73% — above average
73%
Career Allow Rate
54 granted / 74 resolved
+13.0% vs TC avg
+8.2%
Interview Lift
Moderate lift across resolved cases with interview
Typical timeline
3y 3m
Avg Prosecution
11 currently pending
Career history
85
Total Applications
across all art units

Statute-Specific Performance

§101
5.5%
-34.5% vs TC avg
§103
34.9%
-5.1% vs TC avg
§102
6.6%
-33.4% vs TC avg
§112
39.4%
-0.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 74 resolved cases

Office Action

§103
NON-FINAL OFFICE ACTION

This Office Action is directed to a reissue of U.S. Application No. 16/506,219 (the ‘219 application), now U.S. Patent No. 11,250,449 B1, issued on Feb. 15, 2022 to Bledsoe et al. (the ‘449 patent). The status of the claims amended on 5/21/2021 is as follows: Claims 1-22 are pending. Claims 21 and 22 are new. Claims 1-20 are original. Claims 21 and 22 are rejected.

Claim Rejections - 35 USC § 251

Claims 21 and 22 are rejected under 35 U.S.C. 251 as being in violation of the original patent requirement. Section 251 requires that reissue is for "the invention disclosed in the original patent." In order to satisfy the original patent requirement, "[i]t must appear from the face of the instrument that what is covered by the reissue was intended to have been covered and secured by the original." U.S. Indus. Chems., Inc. v. Carbide & Carbon Chems. Corp., 315 U.S. 668, 676 (1942). Furthermore, "it is not enough that an invention might have been claimed in the original patent because it was suggested or indicated in the specification." Id. In other words, the original patent "must clearly and unequivocally disclose the newly claimed invention as a separate invention." Antares Pharma, Inc. v. Medac Pharma Inc., 771 F.3d 1354, 1362 (Fed. Cir. 2014).

The reissue declaration states that the following preliminary amendment "including claims of broader scope than issued claims 1, 9, and 19" is submitted. For example, issued independent claims 1 and 19 require "data content indicative of a time series," and issued independent claim 9 requires "the set of forecasted values indicates time series." New independent claims 21 and 22 correct this error by not including the requirement of "data content indicative of a time series" or the requirement of "the set of forecasted values indicates time series."

The examiner notes that claim 21 requires "receiving data indicating a plurality of first values associated with a variable".
In the specification the term "variable" is associated with: "Some examples of computed time series characteristics include the number of time series samples or observations, determination of predictor variables (e.g., exogenous variables) relevant to the forecast of time series data points, sparseness of time series data points, variability of time series data points, autocorrelation of selected lags of a time series, partial autocorrelation of selected lags of a time series and other suitable time series characteristics." The examiner finds that the term "variable" has a time series value indicative of a time series. Throughout the specification, "exogenous variables" are all indicative of a time series. Appropriate correction is required.

The examiner notes that claim 22 requires the terms "training data" and "a plurality of values". After a detailed reading of the specification, the examiner notes that "training data" is associated with a time series; see C11:L3-15, reproduced below.

(53) Forecaster output interface 213 enables, for example, the display of visualization tools for the understanding and estimated values of time series. For example, in some implementations, forecaster output interface can be a graphical user interface displaying a comparison between forecasted values and observed values over time. Similarly, a graphical user interface can display information regarding selection of training data set and testing datasets (as shown, for example, in FIG. 8), forecast accuracy scores of entrant forecasting models (as shown in FIG. 11), projected values for a time series and other suitable information regarding processes at TSF server 101.

The term "a plurality of values" also has a time series associated with the value. See C4:L41-61. The examiner finds that the original patent specification discloses that the term "variable" from claim 21 has a time series and has values indicative of a time series, as described above.
Throughout the specification, "exogenous variables" are all indicative of a time series. Claim 22 describes "training data", and the term "a plurality of values" also has a time series associated with the respective training data and plurality of values. This is confirmed by how the invention was claimed during the original prosecution. All of the original and issued claims disclose content indicative of a time series. None of the original and issued claims provide a configuration that is different with regards to a time series of events. New independent claims 21 and 22 do not disclose a time series of events; however, all of the variables or values in the claims are associated with a time series of events.

This situation is also somewhat analogous to the recent Federal Circuit decision in Forum US, Inc. v. Flow Valve, LLC, Appeal No. 2018-1765 (Fed. Cir. Jun. 17, 2019). In Forum US, the original patent claims were drawn to a workpiece having a body member and a plurality of arbors (arbors circled):

[Figure: claimed workpiece with the arbors circled]

Forum US, slip op. at 3-4. In reissue, patentee broadened the claims to remove the requirement as to arbors. Id. at 5. Patentee and its expert argued that a person of ordinary skill in the art would understand that the invention included embodiments both with and without arbors. Id. at 6. The Federal Circuit determined that the new claims did not comply with the original patent requirement of section 251 because the face of the patent did not disclose any arbor-less embodiment, and the abstract, summary of invention, and all disclosed embodiments included arbors. Id. at 9. The Court concluded that the specification did not clearly and unequivocally disclose an embodiment without arbors; thus the original patent requirement was violated by including claims that did not require arbors. Id. at 10. Removing the requirement of a time series of data is analogous to removing the arbors in the example above.
The ‘449 patent here does not clearly and unequivocally disclose any embodiment that does not use a time series of events for its variables or values, as discussed above. To overcome this rejection, claims 21 and 22 must be amended to include at least some variables or values that are not subject to a time series, and provide support in the specification for an embodiment that expressly teaches not using a time series dependent variable or value. It is understood that removal of the "time series of events" appears to have been the entire purpose of this reissue; thus the claims may be amended such that the variables or values do not represent a time series.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Averboch ‘337.

21. (New) An apparatus, comprising: a processor; and a memory storing instructions which, when executed by the processor, cause the processor to perform actions including:

A processor and memory are disclosed in ‘377 C11:23-C13:52, which describes all available computing systems, memory, and devices used for the computer system 600 in figure 6.
See also C1:L60-C2:L7, which discloses a processor-readable storage medium having encoded therein executable code of one or more software programs, wherein the one or more software programs when executed by one or more processing devices implement the steps of receiving data on which to base a model.

receiving data indicating a plurality of first values associated with a variable; determining a characteristic of the data;

The ‘449 specification discloses that "In some implementations, one or more of the variables (fields) shown in the dataset 400 and/or other additional variables can be determined by TSF system 101 through metadata extraction associated with a dataset. Some examples of metadata that can be obtained to complement dataset 400 include, descriptions of products, categories of products, types of products, relationship among products, product sizes, product colors, promotional information, labor costs, manufacturing costs, weather values at a given time, e.g., time of selling transaction, demographics for a store at the point of sale, and other suitable data."

Averboch ‘337 discloses that "In accordance with an embodiment of the present invention, the probabilistic statistical analysis, to determine which, if any, of the existing models and their associated data are close enough, is performed using specialized language technology components, including, but not necessarily limited to feature extraction modules, natural language processing (NLP) and natural language understanding (NLU) components, and/or other specialized modules that use machine learning approaches, including, but not necessarily limited to, Maximum Entropy Classification, Conditional Random Fields, and Deep Neural Networks (DNNs). With the help of feature extraction, the machine learning algorithms can be used to predict a class of a given new data set and, as a result, provide probability.
The probability represents how closely related the existing models are to the new data set.” ‘377 C3:L34-49. This is receiving data with values associated with a variable and determining the characteristic of the data.

selecting, based on the characteristic, a set of candidate models from a plurality of forecasting models;

“Depending on the recommendation of the model controller 110, a user may choose different options when proceeding to develop a model for newly received data. For example, in response to a recommendation to use an existing model as the model, a user can select to use a particular existing model as the model. In response to a recommendation to merge, a user can select two or more of the existing models to be merged to develop the model for the received data. In the case of merging, the model controller 110 marks data corresponding to each of the selected models with a corresponding original model category, and merges the data corresponding to the selected models. The results of the marking are stored in a provenance component 122 included in the database 120.” C7:L30-45

training the set of candidate models using the data to produce a set of trained models, wherein training each candidate model in the set of candidate models includes fitting the respective candidate model to at least a portion of the data using a machine learning algorithm;

The feature extraction component 112 extracts meaningful cues from raw data and transforms the data into a structure that machine learning algorithms can understand. Machine learning components use the transformed data from the feature extraction component 112 to train a new model or use an existing model.
In accordance with an embodiment of the present invention, the specialized language technology components 111 of the model controller 110 further include, but are not necessarily limited to, machine learning components, such as the NLP/NLU component 113, and, to support NLP/NLU, the maximum entropy classification (MEC) module 114 using MEC, the conditional random fields (CRF) module 115 using CRF, and the deep neural networks (DNNs) module 116 using deep learning techniques. The NLP/NLU component 113 comprises rule-based analysis modules, machine learning modules, or both rule-based analysis and machine learning modules depending on the role of NLP/NLU in a target application. With the help of feature extraction, the machine learning algorithms are used to predict a class of a given new data set and, as a result, provide probability. The probability represents how closely related the existing models are to the new data set. ‘377 C6:L20-42.

This is a Machine Learning Algorithm (MLA) with a variable, the feature extraction component, used to train a new model or use an existing trained model. The examiner views this as training each model, since each new model must be trained before it can be an existing model, and as such all of the models were trained before use.

generating a plurality of second values, wherein generating the plurality of second values includes executing a plurality of the trained models in the set of trained models; and

Based on the probabilistic analysis, the model controller 110 provides a user, via the input/output module 140, with the existing models that can be used as the model or as a basis to develop the model for the received data.
In providing a user with the existing models that can be used as the model or as a basis to develop the model, the model controller 110 includes a specialized recommendation component 118 that can recommend models to be used as the model, recommend two or more of the existing models that can be merged to develop the model for the received data, and/or recommend one or more of the existing models that can be divided to develop the model for the received data. When recommending one or more of the existing models that can be divided, the recommendation component 118 uses one or more clustering algorithms by looking into highly correlated problem classes.

selecting at least one trained model from the plurality of trained models based on an evaluation of the plurality of second values.

Depending on the data, and the results of the probabilistic analysis, the model controller 110 may be unable to make any recommendation regarding whether an existing model can be used as a model or as a basis to develop the model for newly received data, or that none of the plurality of existing models can be used as the model or as a basis to develop the model for the received data. In a scenario where the results of the probabilistic analysis are inconclusive, the model controller 110 may request that a user carefully consider whether to use an existing model, merge or divide existing models to create another model, or create a new model independent of an existing model. According to an embodiment of the present invention, the model controller 110 provides a specialized recommendation to a user in this scenario. For example, the model controller 110 transmits to the user, via the input/output module 140 and a user device 150, what the model controller 110 determines to be the most useful and relevant data for the user to make an informed decision, including, but not limited to, class probabilities and semantic/meaning correlation scores.
… In accordance with an embodiment of the present invention, the model controller 110 is configured to re-use models and measure new inputs (e.g., unseen events) against all models in the database 120 and suggest existing models that best match the new input(s). The model controller 110 allows for the creation of new models, tracks models and all the data associated with the models, and allows creating of new models by merging or dividing existing models. ‘377 C9:L4-42

As best as can be determined by the examiner, a second value may be class probabilities and semantic/meaning correlation scores. Suggesting existing models or creating new models based on evaluation of second values such as class probabilities and semantic/meaning correlation scores is disclosed above. Furthermore, the abstract states that the model controller is further configured to provide a user, via the input/output module, with the existing models that can be used as the model or as a basis to develop the model for the received data, which is selecting a model based on an evaluation of the second values.

22. (New) A method comprising: performing, by a processor, an evaluation of an incumbent model, the incumbent model being fitted to training data using a machine learning algorithm;

The feature extraction component 112 extracts meaningful cues from raw data and transforms the data into a structure that machine learning algorithms can understand. Machine learning components use the transformed data from the feature extraction component 112 to train a new model or use an existing model.
In accordance with an embodiment of the present invention, the specialized language technology components 111 of the model controller 110 further include, but are not necessarily limited to, machine learning components, such as the NLP/NLU component 113, and, to support NLP/NLU, the maximum entropy classification (MEC) module 114 using MEC, the conditional random fields (CRF) module 115 using CRF, and the deep neural networks (DNNs) module 116 using deep learning techniques. The NLP/NLU component 113 comprises rule-based analysis modules, machine learning modules, or both rule-based analysis and machine learning modules depending on the role of NLP/NLU in a target application. With the help of feature extraction, the machine learning algorithms are used to predict a class of a given new data set and, as a result, provide probability. The probability represents how closely related the existing models are to the new data set. ‘377 C6:L20-42 (emphasis added by the examiner)

selecting, by a processor and based on the evaluation, one or more candidate models from a plurality of candidate models;

(49) As an alternative to making recommendations for a user to choose different options, the model controller 110 can automatically determine how to develop a model for the newly received data (e.g., use an existing model as the model, merge or divide existing models, or create a new model) and automatically execute further processing based on the determination. … FIG. 2 is a flow diagram of a process for model management wherein using existing models is proposed, according to an exemplary embodiment of the invention. Referring to FIG. 2, the process 200 includes, at block 201, receiving new data on which to base a model. The received data is evaluated against existing models and associated data (block 203), and based on, for example, probabilistic statistical analysis, existing models that can be used as a model for the new data are returned (block 205).
The process further includes, at block 207, providing model candidates that can be used as a model for the new data to a user, and, at block 209, selecting an existing model to use from the candidates. Alternatively, a system may perform the selection of an existing model to use without user input.

generating, by a processor, a plurality of values, wherein generating the plurality of values includes executing the one or more candidate models;

Based on the probabilistic analysis, the model controller 110 provides a user, via the input/output module 140, with the existing models that can be used as the model or as a basis to develop the model for the received data. In providing a user with the existing models that can be used as the model or as a basis to develop the model, the model controller 110 includes a specialized recommendation component 118 that can recommend models to be used as the model, recommend two or more of the existing models that can be merged to develop the model for the received data, and/or recommend one or more of the existing models that can be divided to develop the model for the received data. When recommending one or more of the existing models that can be divided, the recommendation component 118 uses one or more clustering algorithms by looking into highly correlated problem classes.

selecting, by a processor, a candidate model from the one or more candidate models based on an evaluation of the plurality of values;

Depending on the data, and the results of the probabilistic analysis, the model controller 110 may be unable to make any recommendation regarding whether an existing model can be used as a model or as a basis to develop the model for newly received data, or that none of the plurality of existing models can be used as the model or as a basis to develop the model for the received data.
In a scenario where the results of the probabilistic analysis are inconclusive, the model controller 110 may request that a user carefully consider whether to use an existing model, merge or divide existing models to create another model, or create a new model independent of an existing model. According to an embodiment of the present invention, the model controller 110 provides a specialized recommendation to a user in this scenario. For example, the model controller 110 transmits to the user, via the input/output module 140 and a user device 150, what the model controller 110 determines to be the most useful and relevant data for the user to make an informed decision, including, but not limited to, class probabilities and semantic/meaning correlation scores. … In accordance with an embodiment of the present invention, the model controller 110 is configured to re-use models and measure new inputs (e.g., unseen events) against all models in the database 120 and suggest existing models that best match the new input(s). The model controller 110 allows for the creation of new models, tracks models and all the data associated with the models, and allows creating of new models by merging or dividing existing models. ‘377 C9:L4-42 (emphasis added by examiner)

As best as can be determined by the examiner, a second value may be class probabilities and semantic/meaning correlation scores. Suggesting existing models or creating new models based on evaluation of second values such as class probabilities and semantic/meaning correlation scores is disclosed above. Furthermore, the abstract states that the model controller is further configured to provide a user, via the input/output module, with the existing models that can be used as the model or as a basis to develop the model for the received data, which is selecting a model based on an evaluation of the second values.
replacing, by a processor, the at least one incumbent model with the selected candidate model based on the evaluation indicating a superior fitness and/or accuracy of the selected candidate model over the incumbent model.

(41) The model controller 110 relies on statistical probabilistic analysis to obtain objective measures for recommending merging or dividing models. Once a user or the system chooses to merge multiple models and data, the model controller 110 marks original labels of the models and data prior to the merger to maintain a history or provenance of events and creates the combined data and the corresponding model. The results of the marking are stored in a provenance component 122 included in the database 120.

(42) In such scenarios, the data is merged after marking an original model category. Then, the original models are replaced by a new combined class model representing a merged class. The combined class model is obtained by retraining an entire classification model using a machine learning methodology.

(43) In response to a recommendation to divide existing models, a user can select one or more of the existing models to be divided to develop the model for the received data. In the case of dividing, like with merging, the model controller 110 marks data corresponding to each of the selected models with a corresponding original model category, and divides the data corresponding to the selected models into a plurality of categories. The results of the marking are stored in a provenance component 122 included in the database 120.

(44) In accordance with an embodiment of the present invention, the model controller 110 recommends the division of the data when the model controller 110 finds that the user merged two instances of highly uncorrelated data by mistake (i.e., a human error). The model controller 110 identifies such scenarios by using unsupervised clustering algorithms and its cluster probabilities.
The model controller 110 also uses semantic/meaning correlations between the data before merging and after merging.

(45) Once merging or dividing existing models is selected, in accordance with an embodiment of the present invention, the model controller 110 divides the merged or divided data into test data and training data, and trains the model for the received data. It is to be understood that although the model controller 110 is described as performing functions, such as dividing the merged or divided data into test and training data, and training the model, the embodiments of the invention are not necessarily limited thereto, and that other components, such as, for example, the new problem trainer 130 can be used to perform the functions of dividing the merged or divided data into test and training data, and training the model, or other functions.

(46) Depending on the data, and the results of the probabilistic analysis, the model controller 110 may determine that none of the plurality of existing models can be used as the model or as a basis to develop the model for the received data, and that the model for the received data be developed independent of the plurality of existing models. Then, the new problem trainer 130, which is operatively connected to the model controller 110, collects crowdsourced data for the new model.

(52) In accordance with an embodiment of the present invention, the model controller 110 is configured to re-use models and measure new inputs (e.g., unseen events) against all models in the database 120 and suggest existing models that best match the new input(s). The model controller 110 allows for the creation of new models, tracks models and all the data associated with the models, and allows creating of new models by merging or dividing existing models.

The above (emphasis added by the examiner) is from ‘377 C7:L53-C8:L40, C9:L35-43.
Figures 2-5 disclose how a model is chosen based on the various results that evaluate data against existing models and associated data, or a new model based on the first result and the second result above, such as “new inputs” as taught above. The statistical probabilistic analysis to obtain objective measures for recommending merging or dividing models is the evaluation based on a superior fitness and/or accuracy of the selected candidate model that best matches the new inputs.

Allowable Subject Matter

Claims 1-20 are allowed. The following is an examiner's statement of reasons for allowance: The prior art references most closely resembling Applicant’s claimed invention are Sarferaz (US Application No. 20160005055) in view of Grichnik (US Application No. 20150154619). Sarferaz provides a generic time series forecasting system that can optimize the parameters for a predictive algorithm and generate predicted or forecast time series data such that the end-user simply needs to identify the current and/or historical performance data. An initial training and continuous adaption of one or more models includes time series historical demand data for a product at a product distribution node in a supply chain over N number of historical time periods, with N being an integer greater than 1, developing a plurality of forecasting models based on the historical demand data over the 1st through (N-1)th historical time periods, and determining, for each one of the sub-periods in the Nth historical time period, a set of weighting values for the forecasting models by implementing a genetic algorithm.

With reference to claim 1, Sarferaz, Grichnik and other prior art of record neither teaches nor renders obvious the limitations “training each entrant forecasting model
from the set of entrant forecasting models using the data content indicative of the time series to produce a set of trained entrant forecasting models, wherein training each entrant forecasting model includes fitting the respective entrant forecasting model to at least a portion of the data content using a machine learning algorithm; instantiating, in a memory, a data structure with a set of forecasted values generated by at least one execution of each trained entrant forecasting model from the set of trained entrant forecasting models, the set of forecasted values indicating estimations of the descriptive values associated with the feature of the entity; and selecting at least one forecasting model from the set of trained entrant forecasting models based on an accuracy evaluation of each forecast value from the set of forecasted values”.

With reference to currently amended claim 9, Sarferaz, Grichnik and other prior art of record neither teaches nor renders obvious the limitations “instantiating, via the processor and in the memory, a data structure with a set of forecasted values generated by an execution of each entrant forecasting model from the set of entrant forecasting models, the set of forecasted values indicates time series with descriptive values of a feature associated with an entity feature; and replacing, via the processor, the at least one incumbent forecasting model with at least one elected forecasting model selected from the set of entrant forecasting models based on at least one forecast model measure, the at least one forecast model measure indicating a superior fitness and/or forecasting accuracy of the at least one elected forecasting model over the at least one incumbent forecasting model”.
With reference to currently amended claim 19, Sarferaz, Grichnik and other prior art of record neither teaches nor renders obvious the limitations “training the set of entrant forecasting models with the data content included in the sample dataset to produce a set of trained entrant forecasting models, wherein training the set of entrant forecasting models includes fitting each entrant forecasting model to at least a portion of the data content using a machine learning algorithm; calculating a set of fitness values that includes at least one fitness measurement value for each trained entrant forecasting model from the set of trained entrant forecasting models; selecting a trained entrant forecasting model from the set of trained entrant forecasting models as an elected forecasting model, based at least in part on the set of fitness values; and executing the elected forecasting model to receive datasets, from a plurality of monitored data sources, the datasets include data content indicative of time series with descriptive values associated with the feature of the entity”.

Conclusion

Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN M HOTALING II, whose telephone number is (571) 272-4437. The examiner can normally be reached 7:30-4:00, Monday-Friday.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew J. Fischer, can be reached at (571) 272-6779. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JOHN M HOTALING II/
Reexamination Specialist, Art Unit 3992

Conferees:
/C. Michelle Tarae/
Reexamination Specialist, Art Unit 3992
/ANDREW J. FISCHER/
Supervisory Patent Examiner, Art Unit 3992
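Claims 21 and 22 recite a champion/challenger loop: train a set of candidate forecasting models, execute them to generate a plurality of values, evaluate those values, and replace the incumbent model only when a challenger shows superior fitness or accuracy. A minimal sketch of that workflow, with every model and function name invented for illustration (nothing below is taken from the ‘449 patent or the cited references):

```python
# Hypothetical champion/challenger model selection, per the claimed steps:
# fit candidates to a portion of the data, score each on held-out points,
# and keep the incumbent unless a challenger scores strictly better.

def mean_forecaster(train):
    avg = sum(train) / len(train)
    return lambda horizon: [avg] * horizon          # flat forecast at the mean

def naive_forecaster(train):
    last = train[-1]
    return lambda horizon: [last] * horizon         # repeat the last observation

def drift_forecaster(train):
    slope = (train[-1] - train[0]) / (len(train) - 1)
    last = train[-1]
    return lambda horizon: [last + slope * (i + 1) for i in range(horizon)]

def mae(forecast, actual):
    # mean absolute error: the "evaluation of the plurality of values"
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def select_model(series, candidates, incumbent=None, holdout=3):
    train, test = series[:-holdout], series[-holdout:]
    # "training the set of candidate models" on a portion of the data
    fitted = {name: fit(train) for name, fit in candidates.items()}
    # "generating a plurality of second values" by executing each trained model
    scores = {name: mae(model(holdout), test) for name, model in fitted.items()}
    best = min(scores, key=scores.get)
    if incumbent is not None:
        # replace the incumbent only on superior accuracy
        if mae(incumbent(holdout), test) <= scores[best]:
            return "incumbent", incumbent
    return best, fitted[best]

series = [10, 12, 14, 16, 18, 20, 22, 24]
candidates = {"mean": mean_forecaster, "naive": naive_forecaster,
              "drift": drift_forecaster}
name, model = select_model(series, candidates)
print(name, model(2))  # → drift [20.0, 22.0]
```

Here the drift model wins because the held-out points continue the linear trend exactly; an incumbent passed via `incumbent=` is retained on ties, so a challenger must score strictly better to displace it.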

Prosecution Timeline

Feb 15, 2024
Application Filed
Dec 17, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent RE50819
METHOD AND APPARATUS FOR VIDEO-ENCODING/DECODING USING FILTER INFORMATION PREDICTION
2y 5m to grant Granted Mar 10, 2026
Patent RE50793
METHODS AND SYSTEMS FOR DELIVERY OF MULTIPLE PASSIVE OPTICAL NETWORK SERVICES
2y 5m to grant Granted Feb 10, 2026
Patent RE50665
Apparatus and Method for Automated Vehicle Roadside Assistance
2y 5m to grant Granted Nov 18, 2025
Patent RE50584
RESOURCE ALLOCATION
2y 5m to grant Granted Sep 09, 2025
Patent RE50571
LITHOGRAPHIC APPARATUS, DEVICE MANUFACTURING METHOD, AND METHOD OF CORRECTING A MASK
2y 5m to grant Granted Sep 02, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
73%
Grant Probability
81%
With Interview (+8.2%)
3y 3m
Median Time to Grant
Low
PTA Risk
Based on 74 resolved cases by this examiner. Grant probability derived from career allow rate.
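The headline figures in this panel follow directly from the raw counts shown above; a quick sketch of the arithmetic (the additive interview adjustment is an assumption about how the tool combines the numbers, not a documented formula):

```python
granted, resolved = 54, 74          # examiner's career counts from the dashboard
allow_rate = granted / resolved     # career allow rate, 0.7297...
interview_lift = 0.082              # +8.2% lift among resolved cases with interview

print(f"Grant probability: {allow_rate:.0%}")                   # → 73%
print(f"With interview:    {allow_rate + interview_lift:.0%}")  # → 81%
```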
