DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Remarks
The present application, Application No. 18/107,296, filed on 02/08/2023, presents claims 1-20 for examination.
Examiner Notes
Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Information Disclosure Statement
The information disclosure statement (IDS) submitted on 02/08/2023 is acknowledged; the submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Drawings
The drawings submitted by the applicant are acceptable for examination purposes.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under AIA 35 U.S.C. 103 as being unpatentable over Mahamuni et al. (US 2023/0018199 A1) (hereinafter Mahamuni) in view of Saha et al. (US 2020/0264927 A1) (hereinafter Saha).
As per claim 1, Mahamuni discloses A computer-implemented method (e.g. Mahamuni: [Abstract] [0003] discloses systems, methods and computer programming products for managing/scheduling batch jobs. [0031] discloses the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. Also see [Figs. 2-3 and 7-8].) comprising: obtaining historical job execution-related data for one or more previous batch jobs in at least one cloud environment and resource utilization-related data for one or more pending batch jobs in the at least one cloud environment (e.g. Mahamuni: [Abstract] [0003] discloses storing batch job parameters, messages and logs in knowledge bases that may be inputted into AI models. The computer-implemented method comprises steps of creating a knowledge base including an archive of failed batch job histories comprising time series data logs, messages and invoked processes associated with failed batch jobs. [0020] discloses creating knowledge bases containing records comprising time series data that include a history of successful batch jobs and messages associated therewith, time stamps and average time for the job to be successful. A second knowledge base or corpus can include logs of the unsuccessful jobs, and corresponding error messages for historically identified failed jobs. [Fig. 2] [0073] discloses knowledge bases 243. [0078-0082] The first knowledge corpus may archive data chronicling the successful completion of batch jobs. The first knowledge corpus comprising archive data of successful batch jobs may comprise time series data tracking system logs, messages with relative time stamps, average time for completion, process level information, and the system metrics for the mainframe during the execution of the batch jobs. 
A second corpus may archive data chronicling failed batch jobs; the second corpus may include time series data of unsuccessful job logs of failed batch jobs, along with corresponding error messages and system metrics measured during the execution of the failed batch jobs. The batch management module may gather relevant metrics and data during the processing of batch jobs and build the archived data of the knowledge base, including both the first corpus comprising information associated with the successful batch jobs and the second corpus describing the parameters surrounding failed batch jobs. These knowledge bases are batch job history corpora maintained in a mainframe/cloud computing environment, thereby teaching obtaining historical batch job execution-related data for previous batch jobs in a cloud environment. Also see [Figs. 2-5, 6A and related description]. These knowledge bases contain time series data that includes a history of both successful and unsuccessful batch jobs and messages associated therewith, time stamps, average time, etc. This constitutes historical job execution-related data for previous batch jobs. Mahamuni also discloses a metrics module that collects performance and system metrics and stores them with job histories in knowledge bases. These collected metrics are resource utilization-related data associated with the infrastructure and jobs, including queued (pending) jobs, in a cloud/mainframe environment. Thus, Mahamuni teaches obtaining both historical job execution-related data and resource utilization-related data for previous and pending batch jobs.); predicting one or more execution outcomes for the one or more pending batch jobs in the at least one cloud environment by processing at least a portion of the historical job execution-related data and at least a portion of the resource utilization-related data using one or more artificial intelligence techniques (e.g. 
Mahamuni: [Abstract] [0003] discloses systems/methods/computer programming products for predicting failures of batch jobs being executed and queued for processing at a future time based on batch job historical data stored in knowledge bases. The data from the knowledge bases are inputted into AI models for analysis. Using predictive analytics and/or machine learning, batch job failures are predicted. Mappings of processes used by each batch job and historical data from previous batch jobs are used to predict success or failure. [0019-0020] the embodiments leverage the use of predictive analytics and machine learning with the processing of batch jobs to analyze batch job parameters and predict batch job failures for both currently running batch jobs and batch jobs within a job queue which have not yet been picked up for processing. The embodiments predict batch job failures of pending/queued batch jobs based on historical batch job-related data and collected system logs/metrics stored in knowledge bases. [0021-0022] based on historical data collected and archived, batch jobs in the job queue can be analyzed and predictions can be made whether or not the queued jobs are expected to fail. The disclosure may integrate the use of AI or machine learning (cognitive computing) to analyze and predict batch job failures of the queued batch jobs. [0085] The historical archive of collected data from both successful and/or unsuccessful histories of batch jobs, along with user feedback, can be applied to making future predictions about currently executing batch jobs and queued batch jobs scheduled to be executed by batch applications 210. Embodiments of the knowledge base 243 may perform automated deductive reasoning, machine learning or a combination of processes thereof to predict future batch job failures. [Figs. 
2-5, 6A, 6B, 7-8] [0006-0013] [0043] [0070] [0082-0084] [0088-0090] [0093] [0095] [0097] [0100] [0108-0114] [0118-0120] discloses using machine learning or AI models/engines to predict queued/pending batch job failures based on historical data archived within the knowledge bases. Thus, Mahamuni teaches predicting execution outcomes (success/failure) for pending/queued batch jobs in a cloud environment based on historical execution-related data and resource utilization-related data using artificial intelligence techniques (ML/AI models).); performing one or more automated actions based at least in part on the one or more predicted execution outcomes and the one or more estimated temporal durations (e.g. Mahamuni: [Abstract] [0003] discloses predicting batch job failures based on parameters, messages and system logs stored in knowledge bases. Using historical data and data identifying the success or failure of batch jobs, the AI model predictively recommends actions to prevent failures from occurring. Recommended actions are reported to the system admin or automatically applied. [0020-0022] discloses creating knowledge bases containing records comprising time series data that includes time stamps and average time for the job to be successful and time series data for unsuccessful or failed batch jobs. These time series histories are used to predict failures and expected process paths. Based on historical data collected and archived, batch jobs in the queue can be analyzed and predictions can be made whether or not the queued jobs are expected to fail. Potential/predicted batch job failures can be proactively flagged and remediation steps can be recommended or automatically implemented. Appropriate action on a batch job expected to fail can include terminating the entire batch job, restarting a failing batch job, holding batch job execution, fixing the failing process and allowing the remainder of the batch to run thereafter. 
[0078-0079] discloses knowledge corpuses may archive data chronicling the successful completion of batch jobs; the archived data comprises time series data tracking system logs, messages with relative time stamps, and average time for completion of the batch jobs. [0088] discloses predicting batch job failures of queued batch jobs and predicting remedial actions to alleviate predicted batch job failures based on historical data stored in the knowledge bases and fact database. The information stored in the knowledge base and fact database is used to reach one or more conclusions and implement an action. Also see [0090] [0092] [0094] [0113-0118] [Figs. 6B and 7].); wherein the method is performed by at least one processing device comprising a processor coupled to a memory (e.g. Mahamuni: [Fig. 1] [0024] discloses a computing device comprising processors coupled to memory. [Abstract] [0003] discloses systems, methods and computer programming products for managing/scheduling batch jobs. [0031] discloses the present invention may be a system, a method, and/or a computer program product at any possible technical detail level of integration. Also see [Figs. 2-3 and 7-8].).
As discussed above, Mahamuni discloses archiving time series data related to batch jobs, including average time for completion of the batch jobs (e.g. Mahamuni: [0020] [0078] [0080]). Mahamuni’s capturing of the average time for a job to be successful within the historical job data, together with the disclosed AI/machine learning models operating on those histories and metrics, implies that Mahamuni’s AI/ML framework is suitable for determining execution times (temporal durations) for the pending batch jobs. Mahamuni does not expressly disclose estimating one or more temporal durations associated with executing the one or more pending batch jobs in the at least one cloud environment.
However, Saha discloses estimating one or more temporal durations associated with executing the one or more pending batch jobs in the at least one cloud environment by processing the at least a portion of the historical job execution-related data and the at least a portion of the resource utilization-related data using the one or more artificial intelligence techniques (e.g. Saha: [Abstract] [0007-0008] discloses systems/methods for facilitating run time predictions for cloud-computing automated tasks. A predictive model may predict the automated task run time based on historical run time to completion, and the run time prediction may be updated using machine learning. [Fig. 4] [0038-0039] discloses parametrizing a predictive model that can be used to generate an estimated time for completion of one or more automated tasks that may be performed on a resource of a cloud computing system. The predictive model is parameterized by historical task data populated with records corresponding to prior automated tasks, e.g., actual run or execution times for the automated tasks and one or more factors that may be used to characterize a given run or execution of an automated task that may impact a run time of the respective automated task. Such factors related to a given run of an automated task may include the resource or resources upon which the task is performed, a time of day, network conditions or characteristics, available computing resources, and so forth. The historical task database includes data that can be used to associate a given task with one or more factors that may be relevant in modeling task run times for new or upcoming automated tasks, such as to generate an estimated run time for such automated tasks based on the nature of the task and other factors related to the planned execution of the task that may be used in parametrizing the predictive model. Also see [0041-0043] [0045] [0047] [0050-0051] [Figs. 4-5 and related description]. 
Thus, Saha expressly discloses a historical task database that stores historical run time to completion or execution time for prior automated tasks along with one or more factors including the resources to execute the automated task. Using this historical database, a predictive model predicts the automated task runtime using machine learning techniques.).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teaching of estimating execution times of incoming or pending automated tasks, based on stored historical execution times of previous automated tasks and factors including resource utilization characteristics, using machine learning models or techniques as taught by Saha into Mahamuni, because the disclosed machine learning techniques predict precise automated task execution times that may allow for a proactive approach in reserving resources for the actual duration necessary to complete an automated task. In this manner, a resource may not be reserved unnecessarily, resulting in efficient use of computing resources (See Saha: [0045] [0050-0051]).
As per claim 2, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein predicting one or more execution outcomes comprises processing the at least a portion of the historical job execution-related data and the at least a portion of the resource utilization-related data using at least one deep neural network comprising multiple branches of network for multiple types of outputs, wherein a first of the multiple branches is associated with predicting the one or more execution outcomes for the one or more pending batch jobs (e.g. Mahamuni: [0022] discloses using AI or machine learning techniques to analyze and predict batch job failures of batch jobs in queue. Embodiments may input data into an AI engine and use continuous or active learning to fine tune a dynamic model capable of predicting batch job failures. A recurrent neural network (RNN)/LSTM model is trained using time series data to predict failures for queued batch jobs. [0118] [0120-0121] discloses predicting potential batch job failures using AI learning techniques, such as RNN/LSTM models. An RNN/LSTM model is trained by inputting the time series data into the RNN/LSTM model. The model predicts success or potential failure of a batch job. Thus, RNN/LSTM models are deep neural networks with multiple layers and branches that process historical execution-related data and resource utilization data to predict execution outcomes (predicted failure or success). Saha: [0042-0045] discloses using a predictive model to predict/estimate run times of automated tasks. The predictive model comprises a multitude of decision trees that may output a mean prediction (i.e., regression) of the individual trees to generate an estimate of task run time; a greater number of decision trees may allow for a more accurate prediction. 
Saha describes a predictive model that uses a historical task database of prior execution times and associated factors (including resource utilization levels) to generate predicted run times, implying a multi-output predictive modeling framework where different outputs may be produced from shared inputs.).
As per claim 3, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 2 [See rejection to claim 2 above], wherein the first of the multiple branches of the at least one deep neural network comprises a classifier (e.g. Mahamuni: [0118] [0120-0121] discloses predicting potential batch job failures using AI learning techniques, such as RNN/LSTM models. An RNN/LSTM model is trained by inputting the time series data into the RNN/LSTM model. The model predicts success or potential failure of a batch job [classifier]. Thus, RNN/LSTM models are deep neural networks with multiple layers and branches that process historical execution-related data and resource utilization data to predict execution outcomes (predicted failure or success). [0085] discloses predicting future batch job failures. The knowledge base comprises a history of successful and unsuccessful batch jobs.).
As per claim 4, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 2 [See rejection to claim 2 above], wherein estimating one or more temporal durations comprises processing the at least a portion of the historical job execution-related data and the at least a portion of the resource utilization-related data using at least one deep neural network comprising multiple branches of network for multiple types of outputs, wherein a second of the multiple branches is associated with estimating the one or more temporal durations associated with executing the one or more pending batch jobs (e.g. Mahamuni: [0022] discloses using AI or machine learning techniques to analyze and predict batch job failures of batch jobs in queue. Embodiments may input data into an AI engine and use continuous or active learning to fine tune a dynamic model capable of predicting batch job failures. A recurrent neural network (RNN)/LSTM model is trained using time series data to predict failures for queued batch jobs. [0118] [0120-0121] discloses predicting potential batch job failures using AI learning techniques, such as RNN/LSTM models. An RNN/LSTM model is trained by inputting the time series data into the RNN/LSTM model. The model predicts success or potential failure of a batch job. Thus, RNN/LSTM models are deep neural networks with multiple layers and branches that process historical execution-related data and resource utilization data to predict execution outcomes (predicted failure or success). [0082] [0088] discloses predicting batch job failures. [0020] Furthermore, Mahamuni also implies estimating the average time for a job to be successful by creating records comprising time series data that includes a history of successful batch jobs and messages associated therewith, time stamps and average time for the job to be successful. 
Thus, Mahamuni estimates/predicts multiple types of outputs, such as batch job failures/success and average completion time of the batch job. Saha: [Abstract] [0008] [0039] [0045] discloses generating a time estimate for an incoming or pending automated task based on historical data using a predictive model. [0043-0045] discloses a predictive model to generate time estimates for automated tasks using historical data. The historical task database may be used to fit a multitude of decision trees as part of the training process, which may output a mean prediction (i.e., regression) of the individual trees to generate an estimate of the task run time. Thus, Mahamuni provides the deep neural network framework (RNN/LSTM model) over historical job histories and metrics, with average completion times included in the data, and implies that the same model can support multiple outputs such as failure prediction and average execution time. Saha explicitly describes using a predictive model to predict/estimate the run time or execution time of a pending task, which is a regression output.).
As per claim 5, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 4 [See rejection to claim 4 above], wherein the second of the multiple branches of the at least one deep neural network comprises a regressor (e.g. Saha: [0043] discloses training process may output mean prediction (regression) of the individual decision trees to generate an estimate of the task run time.).
As per claim 6, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein performing one or more automated actions comprises automatically scheduling at least a portion of the one or more pending batch jobs to be executed at one or more given times based at least in part on the one or more predicted execution outcomes and the one or more estimated temporal durations (e.g. Mahamuni: [Abstract] [0003] discloses applying recommended remediation actions for batch jobs queued for processing at a later point in time based on the predictions and root cause analysis. Also see [0020-0021] discloses based on historical data and collected metrics, batch jobs in the job queue can be analyzed and predictions can be made whether or not the queued jobs are expected to fail. Potential batch job failures can be proactively flagged, and remediation steps can be automatically implemented to alleviate the potential source of the anticipated failure. Appropriate action on a batch job expected to fail can include terminating the entire batch job, restarting a failing batch job, holding batch job execution, fixing the failing process and allowing the batch job to run thereafter. [0088-0090] discloses predicting batch job failures of queued batch jobs and predicting remedial actions to alleviate predicted batch job failures. The reasoning engine may process the facts in the fact database and rules of the knowledge base, then use both sets of information to reach one or more conclusions and implement an action. Also see [Figs. 6-7 and related description]. Saha: [Abstract] discloses facilitating run time predictions for cloud-computing automated tasks, and using the predicted run time to schedule resource locking for the tasks. Resource lock schedules may be determined for a queue of automated tasks utilizing the resource based on predicted run times for the automated tasks. 
The predicted run time may be used to reserve a resource for the given duration to execute the respective automated task. [0008] discloses the predictive model may be used to predict run times for automated tasks. The systems and methods utilize the predicted run time to reserve a resource for a given time period for scheduling a given automated task. Combining Mahamuni’s outcome-based decisions with Saha’s duration-based scheduling yields automated scheduling of pending batch jobs at specific times based on both the predicted execution outcomes and the estimated execution/run times.).
As per claim 7, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein performing one or more automated actions comprises automatically executing at least a portion of one or more pending batch jobs at one or more given times based at least in part on the one or more predicted execution outcomes and the one or more estimated temporal durations (e.g. Mahamuni: [Figs. 6A, 6B, 7 and related description] Mahamuni shows that batch jobs are run as scheduled by the scheduler, and the predictive analytics and suggested actions can alter whether and when certain jobs or processes are executed. [Abstract] [0003] discloses applying recommended remediation actions for batch jobs queued for processing at a later point in time based on the predictions and root cause analysis. Also see [0020-0021] discloses based on historical data and collected metrics, batch jobs in the job queue can be analyzed and predictions can be made whether or not the queued jobs are expected to fail. Potential batch job failures can be proactively flagged, and remediation steps can be automatically implemented to alleviate the potential source of the anticipated failure. Appropriate action on a batch job expected to fail can include terminating the entire batch job, restarting a failing batch job, holding batch job execution, fixing the failing process and allowing the batch job to run thereafter. [0088-0090] discloses predicting batch job failures of queued batch jobs and predicting remedial actions to alleviate predicted batch job failures. The reasoning engine may process the facts in the fact database and rules of the knowledge base, then use both sets of information to reach one or more conclusions and implement an action. Also see [Figs. 6-7 and related description]. Saha: [Figs. 
4-5 and related description] executes automated tasks according to schedules derived from predicted run times: upon receiving a request, the system locks the resource based on the predicted run time and then executes the automated task during that period. [Abstract] discloses facilitating run time predictions for cloud-computing automated tasks, and using the predicted run time to schedule resource locking for the tasks. Resource lock schedules may be determined for a queue of automated tasks utilizing the resource based on predicted run times for the automated tasks. The predicted run time may be used to reserve a resource for the given duration to execute the respective automated task. [0008] discloses the predictive model may be used to predict run times for automated tasks. The systems and methods utilize the predicted run time to reserve a resource for a given time period for scheduling a given automated task. In combination, the system automatically initiates execution of selected pending batch jobs at specific times determined by both their predicted failure/success and their predicted/estimated run times.).
As per claim 8, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein performing one or more automated actions comprises automatically training at least a portion of the one or more artificial intelligence techniques based at least in part on feedback to one or more of the one or more predicted execution outcomes and the one or more estimated temporal durations (e.g. Mahamuni: [0022] discloses inputting data into an AI engine and using continuous or active learning to fine tune a dynamic model capable of predicting batch job failures. Time series data for the known tasks can be created, and used in turn to train a recurrent neural network model capable of predicting failures for given or queued batch jobs. [0095] discloses a neural network model can be trained by the AI engine using time series data. [0099] discloses continuously training the model using the historical data collected by the batch management module; the model may continuously be improved over time from active and continuous feedback. Also see [0097] [0120]. Saha: [0041-0043] discloses providing input to a predictive model as part of training the predictive model and improving the historical task database accessed by the predictive model. The historical task database may be used to fit a multitude of decision trees as part of the training process that may output a mean prediction to generate an estimate of the task run time. A greater number of decision trees may allow for a more accurate prediction. Thus, the model learns to adjust its run time predictions when actual durations differ from predicted ones. The combination supports automatically training/updating the AI model based on feedback from observed execution outcomes (success/failure) and observed differences between predicted and actual run times.).
As per claim 9, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein obtaining historical job execution-related data comprises obtaining one or more of data pertaining to historical job execution outcomes and data pertaining to processing time data of one or more types of jobs (e.g. Mahamuni: [0020] discloses creating knowledge bases containing records comprising time series data that includes a history of successful and unsuccessful/failed batch jobs, messages associated therewith, time stamps, average time for the job to be successful and error messages for historically identified failed jobs. Also see [0078-0080] [0083] [0109-0110]. Saha: [0039] discloses historical task database populated with data or records corresponding to prior automated tasks, the actual run or execution times for the respective automated task and one or more factors that may be used to characterize a given run or execution of an automated task that may impact a run time of the respective automated task. Also see [0043-0045] [0049].).
As per claim 10, the combination of Mahamuni and Saha discloses The computer-implemented method of claim 1 [See rejection to claim 1 above], wherein obtaining utilization-related data for one or more pending batch jobs comprises obtaining one or more of central processing unit data, memory data, storage utilization data, input-output information, host infrastructure availability information, information pertaining to at least one of load, volume, and seasonality, date and time information, and job name information (e.g. Mahamuni: [0078-0079] discloses a metrics module collects system metrics for the mainframe during the execution of the batch job, time series data tracking system logs, messages with relative time stamps, and process level information describing processes invoked during the job steps. [0081] The metrics module may gather data of the mainframe during the processing in order to assess the overall health of the mainframe. The mainframe responsible for processing the batch jobs may deploy a number of resources, such as CPU, I/O, storage and networks, that work collectively to process a batch job. In order to assess the system’s overall health, data for system reports can be gathered for these resources. Examples of performance metrics that can be gathered may include average throughput, average response time, resource utilization (i.e., CPU utilization, storage utilization, I/O rates, paging rates, etc.) and resource velocity. Also see [0083] [0085]. Saha: [0039] [0049] discloses storing information about the automated task, its execution and other factors in the historical task database. The data store is populated with data or records corresponding to prior automated tasks and one or more factors that may be used to characterize a given run or execution of an automated task. Examples of such factors may include the resource or resources upon which the respective task was performed, a time of day and/or week, network conditions or characteristics (e.g., bandwidth, network speed, latency and so forth), available computing resources, and so forth. Also see [0047] [Claim 40].).
As per claims 11, 12, 13, 14 and 15, these are non-transitory processor-readable storage medium claims having limitations similar to those cited in method claims 1, 2, 4, 6 and 7, respectively. Thus, claims 11, 12, 13, 14 and 15 are also rejected under the same rationale as set forth in the rejections of claims 1, 2, 4, 6 and 7, respectively.
As per claims 16, 17, 18, 19 and 20, these are apparatus/system claims having limitations similar to those cited in method claims 1, 2, 4, 6 and 7, respectively. Thus, claims 16, 17, 18, 19 and 20 are also rejected under the same rationale as set forth in the rejections of claims 1, 2, 4, 6 and 7, respectively.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hiren Patel whose telephone number is (571) 270-3366. The examiner can normally be reached on Monday-Friday 9:30 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) Form at https://www.uspto.gov/patents/uspto-automated-interview-request-air-form.
If attempts to reach the above noted Examiner by telephone are unsuccessful, the Examiner’s supervisor, April Y. Blair, can be reached at the following telephone number: (571) 270-1014. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from Patent Center and the Private Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from Patent Center or Private PAIR. Status information for unpublished applications is available through Patent Center or Private PAIR to authorized users only. Should you have questions on access to Patent Center or the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
January 7, 2026
/HIREN P PATEL/Primary Examiner, Art Unit 2196