Prosecution Insights
Last updated: April 19, 2026
Application No. 18/132,914

PERFORMING AUTOMATED TICKET CLASSIFICATION

Final Rejection under §103

Filed: Apr 10, 2023
Examiner: MOBIN, HASANUL
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Oracle International Corporation
OA Round: 2 (Final)

Grant Probability: 75% (Favorable)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 75% (above average; 506 granted / 675 resolved; +20.0% vs TC avg)
Interview Lift: +39.0% (strong; resolved cases with vs. without interview)
Avg Prosecution: 3y 5m (typical timeline)
Currently Pending: 16
Total Applications: 691 (career history, across all art units)

Statute-Specific Performance

§101: 17.0% (-23.0% vs TC avg)
§103: 53.3% (+13.3% vs TC avg)
§102: 10.9% (-29.1% vs TC avg)
§112: 9.8% (-30.2% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 675 resolved cases
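The dashboard figures above follow from simple arithmetic on the career counts and the "vs TC avg" deltas. A minimal sketch reproducing them (variable names are illustrative only, not from any real analytics API):

```python
# Career allowance rate from the examiner's resolved-case counts shown above.
granted = 506
resolved = 675
allow_rate = granted / resolved  # ≈ 0.75, displayed as 75%

# Tech Center averages implied by each statute-specific delta:
# examiner's rate minus the signed "vs TC avg" figure.
statute_rates = {"101": 17.0, "103": 53.3, "102": 10.9, "112": 9.8}
deltas = {"101": -23.0, "103": 13.3, "102": -29.1, "112": -30.2}
tc_avg = {s: round(statute_rates[s] - deltas[s], 1) for s in statute_rates}

print(f"Career allow rate: {allow_rate:.1%}")
for s in statute_rates:
    print(f"§{s}: examiner {statute_rates[s]}% vs implied TC avg {tc_avg[s]}%")
```

Note the implied Tech Center average is the same backed-out estimate the dashboard's "black line" represents; the dashboard itself does not publish those averages directly.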

Office Action

§103
DETAILED ACTION

Remarks

This communication is in response to the amendment/arguments filed on March 2, 2026, which have been fully considered. The rejection is made final. Claims 1-20 are pending for examination. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Examiner Notes

The examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passages as taught by the prior art or disclosed by the examiner. The examiner requests that, in response to this Office action, support be shown for language added to any original claims on amendment and for any new claims; that is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or drawing figure(s). This will assist the examiner in prosecuting the application. When responding to this Office action, Applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made.
He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).

Response to Amendment

Applicant’s arguments/amendment filed on March 2, 2026, with respect to the rejection(s) of amended claim(s) 1-20 under 35 U.S.C. § 112 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. The objection to claim 2 imposed in the previous Office action has been withdrawn because of the amendment to the claim.

Response to Arguments

Applicant’s arguments with respect to the claims, filed March 2, 2026, have been considered but are moot in view of Kuchibhotla et al. (US Patent Publication No. 2009/0106180 A1). Applicant’s argument on pages 8-9 that Earhart does not teach “retrieving, by the computer system, system performance data that comprises one or more performance indicator metrics determined during the time window” and “determining, by the computer system, system health data during the time of the current incident, wherein the system health data includes one or more of a usage of the component, a number of service tickets issued within a predetermined time period for the component” is acknowledged but not deemed persuasive. Earhart [0038-0041] discloses an updated time-period generated by the ticket manager component. … The issue element of the ticket manager component is system performance and/or user-interaction with the system. … Infrastructure elements of the ticket manager component can include infrastructure components and/or user(s) utilizing or attempting to utilize those infrastructure components during a given time-period, such as the impact time-period. Infrastructure components can include both hardware components and software, such as applications (i.e., usage of the component). Earhart [0038] further discloses that the impact time-period is a period of time surrounding the time at which the reported issue occurred; the impact time-period includes a start time and an end time, and can be a time-period identified by the user in an incident ticket … updated time-period generated by the ticket manager component (i.e., service ticket issued within a time period). Therefore, Earhart discloses the above argued limitation of claim 1.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4.
Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 6-12 and 16-19 are rejected under 35 U.S.C. 103 as being unpatentable over Meng et al. (US Patent No. 11,270,226 B2, ‘Meng’ hereafter, provided by the IDS) in view of Earhart et al. (US Patent Publication No. 2023/0362071 A1, ‘Earhart’ hereafter) and further in view of Kuchibhotla et al. (US Patent Publication No. 2009/0106180 A1, ‘Kuchibhotla’ hereafter).

Regarding claim 1. Meng teaches a computer-implemented method, comprising: identifying, by a computer system, an unlabeled ticket generated in response to an occurrence of a current incident associated with a component (Meng discloses identifying, by a computer system, generating an unlabeled ticket: “tickets being generated by a wide variety of sources that include individual hardware devices, software applications, and human users of the system and the ticket generating system generates tickets”, Meng, Col 1, lines 17-20 and Col 2, lines 45-50 and Fig. 1. “For those tickets that do not have an above-threshold confidence for any label, a second step uses clustering to group tickets together according to, e.g., semantic similarities. This process can identify an appropriate label for an unlabeled ticket according to labeled tickets that it has been grouped together with (i.e., identifying an unlabeled ticket generated).
The process can also identify tickets and groups of tickets that do not match any label and represent a new type or format of ticket”, Meng, Col 3, lines 22-26); Meng does not teach determining, by the computer system, a time window starting at a predetermined time before the occurrence of the current incident and ending at a time after the occurrence of the current incident; retrieving, by the computer system, system performance data that comprises one or more performance indicator metrics determined during the time window; determining and labeling, by the computer system, anomalies within the system performance data to create labeled system performance data; determining, by the computer system, system health data during the time of the current incident; determining, by the computer system utilizing a trained machine learning model, a label for the unlabeled ticket, utilizing the unlabeled ticket, the labeled system performance data, and the system health data. However, Earhart teaches determining, by the computer system, a time window starting at a predetermined time before the occurrence of the current incident and ending at a time after the occurrence of the current incident (The impact time-period is a period of time surrounding the time at which the reported issue occurred. The impact time-period includes a start time and an end time. The impact time-period can be a time-period identified by the user 114 in an incident ticket … updated time-period generated by the ticket manager component, Earhart [0038]); retrieving, by the computer system, system performance data that comprises one or more performance indicator metrics determined during the time window (Earhart discloses retrieving, by the computer system, system performance data determined during the time window “updated time-period generated by the ticket manager component. … The issue element of ticket manager component is system performance and/or user-interaction with the system. 
… Infrastructure elements of ticket manager component can include infrastructure components and/or user(s) utilizing or attempting to utilize those infrastructure components during a given time-period, such as the impact time-period. Infrastructure components can include both hardware components and software, such as applications”, Earhart [0038-0041]. “An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance”, Earhart [0050]. “Relevant data associated with one or more fields of interest are extracted and further analyzed. The analysis results are utilized by the system to determine/predict impacted infrastructure, time frames and suggests anomalous metrics and events using key pieces of data from the incident tickets. The system utilizes potential key performance indicators that indicated the event producing predicted infrastructure, customers, programs and/or time frames”, Earhart [0080]); determining and labeling, by the computer system, anomalies within the system performance data to create labeled system performance data (The issue element of ticket manager component is system performance and/or user-interaction with the system, Earhart [0039]. An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance.
… anomalous data can include any type of log data, error messages, metric data, failover data, and/or other performance data indicating an issue with system components, Earhart [0050-0052]); determining, by the computer system, system health data during the time of the current incident, wherein the system health data includes one or more of a usage of the component, a number of service tickets issued within a predetermined time period for the component (Earhart discloses system health data during the time of the current incident “An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance”, Earhart [0050-0052]. “Relevant data associated with one or more fields of interest are extracted and further analyzed. The analysis results are utilized by the system to determine/predict impacted infrastructure, time frames and suggests anomalous metrics and events using key pieces of data from the incident tickets. The system utilizes potential key performance indicators that indicated the event producing predicted infrastructure, customers, programs and/or time frames”, Earhart [0080]. “updated time-period generated by the ticket manager component. … The issue element of ticket manager component is system performance and/or user-interaction with the system. … Infrastructure elements of ticket manager component can include infrastructure components and/or user(s) utilizing or attempting to utilize those infrastructure components during a given time-period, such as the impact time-period. Infrastructure components can include both hardware components and software, such as applications” (i.e., usage of the component), Earhart [0038-0041]. 
“The impact time-period is a period of time surrounding the time at which the reported issue occurred. The impact time-period includes a start time and an end time. The impact time-period can be a time-period identified by the user 114 in an incident ticket … updated time-period generated by the ticket manager component” (i.e., service ticket issued within a time period), Earhart [0038]); determining, by the computer system utilizing a trained machine learning model (by utilizing a machine learning prediction model to analyze available real-time data to identify potentially impacted infrastructure components, software, and users likely to be affected by an issue reported in an incident ticket, Earhart [0018]. The ticket manager component includes an impact model utilizing a machine learning component. The machine learning component may include pattern recognition, modeling, or other machine learning algorithms to analyze impact data and/or labeled incident tickets, Earhart [0029]), a label for the unlabeled ticket, utilizing the unlabeled ticket, the labeled system performance data, and the system health data (the updated incident tickets are provided to the impact model as training data. The updated incident tickets are used as training data and feedback to further improve the accuracy and reliability (i.e., system health and system performance) of the updated/labeled incident tickets generated by the system (i.e., label for the unlabeled incident ticket). Furthermore, Earhart discloses retrieving, by the computer system, system performance data determined during the time window: “updated time-period generated by the ticket manager component. … The issue element of ticket manager component is system performance and/or user-interaction with the system.
… Infrastructure elements of ticket manager component can include infrastructure components and/or user(s) utilizing or attempting to utilize those infrastructure components during a given time-period, such as the impact time-period. Infrastructure components can include both hardware components and software, such as applications”, Earhart [0038-0041]. “An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance”, Earhart [0050]. As more incident tickets are received and handled by the system, the system predictions become more accurate. This improves the quality of incident ticket data and permits quicker customer response by reducing both customer response time and root cause analysis time, Earhart [0019]. The ticket manager component updates the incident tickets created by users (i.e., the initial incident ticket created by the user is considered an unlabeled incident ticket) with the additional predicted impact data. The updated incident tickets including additional details are referred to as labeled incident tickets (i.e., utilizing machine learning to label the unlabeled incident ticket as a labeled incident ticket), Earhart [0036]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Meng and Earhart before him/her, to modify Meng with the teaching of Earhart’s impact predictions based on incident-related data.
One would have been motivated to do so for the benefit of efficiently automating the creation of incident tickets in order to have accurate information in the ticket (Earhart, Abstract, [0002]). Meng and Earhart do not explicitly teach a number of alarms triggered for the component. However, Kuchibhotla teaches a number of alarms triggered for the component (Kuchibhotla discloses a number of alarms triggered for the component: “as depicted in FIG. 2, the processing for determining and outputting a status value for a system health meter may be initiated in response to a user request to compute the system health meter via a user interface. The processing may also be initiated in response to various other events and/or conditions detected in the monitored system such as creation of a new incident in response to an error or failure in the monitored system, etc. Likewise, the processing for determining a status value for a component may be triggered in response to various different events. For example, the component status value processing may be initiated upon receiving a signal to determine a status value for the system health meter”, Kuchibhotla [0056]. “Using these rules, depending on the number of CRITICAL and WARNING issues impacting the component, the status value for the component may be computed accordingly” (i.e., number of alarms triggered by the component), Kuchibhotla [0072]. … “a component affecting the system health meter may have its own component health meter.” … “the processing for determining a status value for a component may be triggered in response to different events … the status value determined for the incident meter is a value selected from a set of pre-configured values, such as Warning, Critical, and Normal”, Kuchibhotla [0091-0092]).
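Taken together, the claim 1 limitations mapped above recite a pipeline: build a time window around the incident, pull performance metrics for that window, flag and label anomalies, gather system health data (usage, ticket count, alarms), and let a trained model pick a label. A hypothetical sketch of those steps follows; it is an illustration of the claimed method only, not code from Meng, Earhart, or Kuchibhotla, and every function, field, and threshold here is invented for the example:

```python
from datetime import timedelta

# Illustrative "predetermined time" before the incident and span after it.
PRE_WINDOW = timedelta(minutes=30)
POST_WINDOW = timedelta(minutes=30)

def classify_ticket(ticket, metrics_store, health_store, model):
    """Hypothetical sketch of the method recited in claim 1."""
    # (1) Time window starting before and ending after the incident.
    start = ticket["incident_time"] - PRE_WINDOW
    end = ticket["incident_time"] + POST_WINDOW

    # (2) Retrieve performance-indicator metrics determined during the window.
    perf = metrics_store.query(ticket["component"], start, end)

    # (3) Determine and label anomalies within the performance data; a crude
    # threshold rule stands in for whatever anomaly detector is actually used.
    labeled_perf = [
        {**m, "anomaly": m["value"] > m["baseline"] * 1.5} for m in perf
    ]

    # (4) System health data: component usage, tickets issued in the period,
    # and alarms triggered for the component.
    health = {
        "usage": health_store.usage(ticket["component"], start, end),
        "ticket_count": health_store.tickets(ticket["component"], start, end),
        "alarm_count": health_store.alarms(ticket["component"], start, end),
    }

    # (5) Trained model determines a label from all three inputs.
    return model.predict(ticket, labeled_perf, health)
```

The `metrics_store`, `health_store`, and `model` objects are placeholders for whatever monitoring backend and classifier an implementation would supply; the sketch only fixes the data flow among the claimed steps.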
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Meng, Earhart and Kuchibhotla before him/her, to further modify Meng with the teaching of Kuchibhotla’s system health meter. One would have been motivated to do so for the benefit of efficiently indicating the status or health of a software system in a simple and summarized manner (Kuchibhotla, Abstract).

Regarding claim 2. Meng as modified teaches wherein a post-incident portion of the time window includes an entire time period occurring between the occurrence of the current incident and a delivery of the unlabeled ticket to a destination (Earhart [0030], [0034], [0050]).

Regarding claim 6. The computer-implemented method of claim 1, wherein retrieving, by the computer system, the system performance data determined during the time window includes: identifying, by the computer system, a predetermined system component associated with the current incident (Earhart [0017], [0027], [0050], [0057]); determining, by the computer system, one or more performance metrics for the predetermined system component (Earhart [0017], [0027], [0050], [0057]); identifying, by the computer system, one or more additional system components correlated to the predetermined system component (Earhart [0027]); and determining, by the computer system, one or more performance metrics for the additional system components (Earhart [0027]).

Regarding claim 7. Meng as modified teaches wherein a correlation between the predetermined system component and the one or more additional system components is defined by one or more models (Earhart [0027]).

Regarding claim 8. Meng as modified teaches wherein a correlation between the predetermined system component and the one or more additional system components is determined dynamically utilizing another trained machine learning model (Earhart [0031], [0033], [0053-0057]).

Regarding claim 9.
Meng as modified teaches further comprising creating, by the computer system, a feature matrix utilizing the labeled system performance data, wherein the feature matrix is provided as input into the trained machine learning model (Earhart [0027], [0050], [0069]).

Regarding claim 10. Meng as modified teaches wherein the feature matrix includes a data matrix that includes a plurality of system performance metrics and an indication as to whether each system performance metric contains one or more anomalies (Earhart [0051-0052]).

Regarding claim 11. Meng teaches a system comprising: one or more processors (The system 104 includes a hardware processor 502 and memory 504 … The ticket classification/response system 104 further includes one or more functional modules that may, in some embodiments, be implemented as software that is stored in memory 504 and is executed by hardware processor 502, Meng, Col 9, lines 22-46; the processing system 600 includes at least one processor (CPU) 604, Col 10, lines 22-31 and Figs. 5-6) configured to: Although claim 11 is directed to a system, it is similar in scope to claim 1. The method steps of claim 1 substantially encompass the system recited in claim 11. Therefore, claim 11 is rejected for at least the same reasons as claim 1 above.

Regarding claims 12 and 16-19, the method steps of claims 2 and 6-9 substantially encompass the systems recited in claims 12 and 16-19. Therefore, claims 12 and 16-19 are rejected for at least the same reasons as claims 2 and 6-9 above.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Meng et al. (US Patent No. 11,270,226 B2, ‘Meng’ hereafter, provided by the IDS) in view of Earhart et al. (US Patent Publication No. 2023/0362071 A1, ‘Earhart’ hereafter) in view of Kuchibhotla et al. (US Patent Publication No. 2009/0106180 A1, ‘Kuchibhotla’ hereafter) and further in view of Ijidakinro et al. (US Patent Publication No. 2008/0056233 A1, ‘Ijidakinro’ hereafter).

Regarding claim 3.
Meng as modified teaches wherein a post-incident portion of the time window includes an entire time period occurring between the occurrence of the current incident (Earhart [0038], [0084]), but Meng, Earhart and Kuchibhotla do not explicitly teach a removal of the unlabeled ticket from an analysis queue. However, Ijidakinro teaches a removal of the unlabeled ticket from an analysis queue (A customer support incident ticket is removed from the support queue 320 when it is assigned to a customer support agent, Ijidakinro [0029], [0059], [0065]). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Meng, Earhart, Kuchibhotla and Ijidakinro before him/her, to further modify Meng with the teaching of Ijidakinro’s incident support routing. One would have been motivated to do so for the benefit of creating a ticket based on data gathered from the customer and routing it to the appropriate customer support agents, who may be remotely located and geographically distributed (Ijidakinro, Abstract, [0003], [0019]).

Regarding claim 13, the method steps of claim 3 substantially encompass the system recited in claim 13. Therefore, claim 13 is rejected for at least the same reasons as claim 3 above.

Claims 4, 5, 14 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Meng et al. (US Patent No. 11,270,226 B2, ‘Meng’ hereafter, provided by the IDS) in view of Earhart et al. (US Patent Publication No. 2023/0362071 A1, ‘Earhart’ hereafter) in view of Kuchibhotla et al. (US Patent Publication No. 2009/0106180 A1, ‘Kuchibhotla’ hereafter) and further in view of Wu et al. (Chinese Patent Publication No. CN 110880709 A, ‘Wu’ hereafter).

Regarding claim 4.
Meng, Earhart and Kuchibhotla do not teach wherein a length of a post-incident portion of the time window is calculated based on an average time taken to address historical tickets in an analysis queue at a time the unlabeled ticket was generated. However, Wu teaches wherein a length of a post-incident portion of the time window is calculated based on an average time taken to address historical tickets in an analysis queue at a time the unlabeled ticket was generated (Calculate the average of the remaining time data after screening. Sum the average time of each operation item to calculate the total time from the start of ice-melting work to the start of the line. … The time node of each operation item is obtained from the historical operation ticket, and then the operation duration of each item is calculated, and the normal duration data is filtered. According to the process of ice melting, calculate when to dispatch workers, so that the ice watchers will start the ice-watch work when they arrive at the ice-watch site, to avoid the ice watchers waiting ineffectively in the harsh climate environment, Wu, page 2, lines 38-40, 45-50 and page 4, lines 14-16). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Meng, Earhart, Kuchibhotla and Wu before him/her, to further modify Meng with the teaching of Wu’s method for determining ice observation and dispatching time of a power transmission line. One would have been motivated to do so for the benefit of acquiring the time node data of the operation items in a plurality of historical operation tickets and calculating total and average time (Wu, Abstract).

Regarding claim 5.
Meng as modified teaches wherein a length of a post-incident portion of the time window is calculated by averaging a post-incident portion of historical time windows for historical incidents used to train the trained machine learning model (Wu, page 2, lines 38-40, 45-50 and page 4, lines 14-16).

Regarding claims 14 and 15, the method steps of claims 4 and 5 substantially encompass the systems recited in claims 14 and 15. Therefore, claims 14 and 15 are rejected for at least the same reasons as claims 4 and 5 above.

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Savir et al. (US Patent Publication No. 2020/0242623 A1, ‘Savir’ hereafter) in view of Wu et al. (Chinese Patent Publication No. CN 110880709 A, ‘Wu’ hereafter) and further in view of Earhart et al. (US Patent Publication No. 2023/0362071 A1, ‘Earhart’ hereafter) and further in view of Kuchibhotla et al. (US Patent Publication No. 2009/0106180 A1, ‘Kuchibhotla’ hereafter).

Regarding claim 20. Savir teaches a computer-implemented method, comprising: identifying, by a computer system, an historical ticket generated in response to an occurrence of an historical incident within a system (Savir discloses identifying, by a computer system, an historical ticket generated in response to an occurrence of an historical incident within a system: “the exemplary customer support ticket aggregation system is a content-based recommender system that, given a new user ticket (e.g., a customer support incident), will search for similar issues (tickets) that have been processed by the system and that are likely to be related (i.e., historical ticket). Recommendations are made by retrieving ‘candidate’ tickets from a designated knowledge base and by using machine-learning algorithms that recommend customer support ticket(s) that are most relevant to the incoming issue in terms of a root cause analysis investigation”, Savir [0025].
“The ticket fingerprint is processed by a machine learning similarity model, discussed further below in conjunction with FIG. 4, that identifies one or more related customer service tickets (e.g., historical tickets and tickets being processed in real-time)”, Savir [0028]); identifying, by the computer system, a label assigned to the historical ticket (Savir discloses identifying, by the computer system, a label assigned to the historical ticket: “As shown in FIG. 1, the exemplary customer support ticket aggregation system comprises a topic extractor that processes one or more in-process customer service tickets and a corpus of processed customer support tickets (i.e., historical ticket), and generates a ticket fingerprint, as discussed further below in conjunction with FIG. 3. In one or more embodiments, the exemplary corpus of processed customer support tickets comprises processed customer support tickets that are labeled with the corresponding root cause identified by the root cause analysis”, Savir [0026]. “The ticket fingerprint is processed by a machine learning similarity model, discussed further below in conjunction with FIG. 4, that identifies one or more related customer service tickets (e.g., historical tickets and tickets being processed in real-time)”, Savir [0028]); training, by the computer system, a machine learning model to determine the label assigned to the historical ticket, utilizing training data including the historical ticket and the label assigned to the historical ticket (the topic model distinguishes different words used to describe related incidents and is trained over the collection of prior customer support tickets (i.e., historical ticket). In at least one embodiment, the machine learning similarity model comprises at least two Siamese networks that determine whether at least two applied fingerprints of customer support tickets are related, based on the predefined similarity metric, Savir [0005], [0030-0031], [0038].
The exemplary corpus of processed customer support tickets 140 comprises processed customer support tickets that are labeled with the corresponding root cause identified by the root cause analysis, Savir [0026]). But Savir does not explicitly teach retrieving, by the computer system, system performance data that comprises one or more performance indicator metrics determined during the time window; determining and labeling, by the computer system, anomalies within the system performance data to create labeled system performance data; determining, by the computer system, health data for the system during the time of the historical incident, wherein the system health data includes one or more of a usage of a component, a number of service tickets issued within a predetermined time period for the component, and a number of alarms triggered for the component. However, Earhart teaches retrieving, by the computer system, system performance data that comprises one or more performance indicator metrics determined during the time window (Earhart discloses retrieving, by the computer system, system performance data determined during the time window: “updated time-period generated by the ticket manager component. … The issue element of ticket manager component is system performance and/or user-interaction with the system. … Infrastructure elements of ticket manager component can include infrastructure components and/or user(s) utilizing or attempting to utilize those infrastructure components during a given time-period, such as the impact time-period. Infrastructure components can include both hardware components and software, such as applications”, Earhart [0038-0041]. “An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data.
Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance”, Earhart [0050]. “Relevant data associated with one or more fields of interest are extracted and further analyzed. The analysis results are utilized by the system to determine/predict impacted infrastructure, time frames and suggests anomalous metrics and events using key pieces of data from the incident tickets. The system utilizes potential key performance indicators that indicated the event producing predicted infrastructure, customers, programs and/or time frames”, Earhart [0080]);

determining and labeling, by the computer system, anomalies within the system performance data to create labeled system performance data (The issue element of ticket manager component is system performance and/or user-interaction with the system, Earhart [0039]. An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance.
… anomalous data can include any type of log data, error messages, metric data, failover data, and/or other performance data indicating an issue with system components, Earhart [0050-0052]);

determining, by the computer system, health data for the system during the time of the historical incident, wherein the system health data includes one or more of a usage of a component, a number of service tickets issued within a predetermined time period for the component (Earhart discloses system health data during the time of the historical incident: “An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance”, Earhart [0050-0052]. “Relevant data associated with one or more fields of interest are extracted and further analyzed. The analysis results are utilized by the system to determine/predict impacted infrastructure, time frames and suggests anomalous metrics and events using key pieces of data from the incident tickets. The system utilizes potential key performance indicators that indicated the event producing predicted infrastructure, customers, programs and/or time frames”, Earhart [0080]. “updated time-period generated by the ticket manager component. … The issue element of ticket manager component is system performance and/or user-interaction with the system. … Infrastructure elements of ticket manager component can include infrastructure components and/or user(s) utilizing or attempting to utilize those infrastructure components during a given time-period, such as the impact time-period. Infrastructure components can include both hardware components and software, such as applications” (i.e., usage of the component), Earhart [0038-0041].
“The impact time-period is a period of time surrounding the time at which the reported issue occurred. The impact time-period includes a start time and an end time. The impact time-period can be a time-period identified by the user 114 in an incident ticket … updated time-period generated by the ticket manager component” (i.e., service ticket issued within a time period), Earhart [0038]);

and the health data for the system during the time of the historical incident, and the labeled system performance data (Earhart discloses the health data for the system during the time of the historical incident, and the labeled system performance data: “An analysis component analyzes the real-time monitor data during the impact time-period and/or the updated impact time-period to identify anomalous data. Anomalous data is data indicative of an infrastructure component failing to operate/function as expected, deviating from normal operational standards, or otherwise indicating a change in performance”, Earhart [0050-0052]. “Relevant data associated with one or more fields of interest are extracted and further analyzed. The analysis results are utilized by the system to determine/predict impacted infrastructure, time frames and suggests anomalous metrics and events using key pieces of data from the incident tickets. The system utilizes potential key performance indicators that indicated the event producing predicted infrastructure, customers, programs and/or time frames”, Earhart [0080]).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Savir and Earhart before him or her, to modify Savir with the teaching of Earhart’s impact predictions based on incident-related data. One would have been motivated to do so for the benefit of efficiently creating and automating the creation of incident tickets in order to have accurate information in the ticket (Earhart Abstract, [0002]).
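The anomaly limitation mapped to Earhart above amounts to flagging performance-indicator samples that deviate from normal operational standards during the impact time-period, and labeling the result. A minimal sketch of one way such labeling could be done, using a simple z-score rule (the function name, threshold, and sample data are illustrative assumptions, not anything disclosed by Earhart):

```python
from statistics import mean, stdev

def label_anomalies(metric_values, threshold=2.0):
    """Label each performance-metric sample in the time window as
    normal (0) or anomalous (1) using a z-score deviation rule."""
    mu = mean(metric_values)
    sigma = stdev(metric_values)
    labels = []
    for v in metric_values:
        # Distance from the window mean, in standard deviations.
        z = abs(v - mu) / sigma if sigma else 0.0
        labels.append(1 if z > threshold else 0)
    return labels
```

A production system would likely use per-metric baselines rather than a single in-window statistic, but the output shape is the same: each sample in the retrieved performance data paired with an anomaly label.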
Savir and Earhart do not explicitly teach a number of alarms triggered for the component. However, Kuchibhotla teaches a number of alarms triggered for the component (Kuchibhotla discloses a number of alarms triggered for the component: “as depicted in FIG. 2, the processing for determining and outputting a status value for a system health meter may be initiated in response to a user request to compute the system health meter via a user interface. The processing may also be initiated in response to various other events and/or conditions detected in the monitored system such as creation of a new incident in response to an error or failure in the monitored system, etc. Likewise, the processing for determining a status value for a component may be triggered in response to various different events. For example, the component status value processing may be initiated upon receiving a signal to determine a status value for the system health meter”, Kuchibhotla [0056]. “Using these rules, depending on the number of CRITICAL and WARNING issues impacting the component, the status value for the component may be computed accordingly” (i.e., number of alarms triggered by the component), Kuchibhotla [0072]. … “a component affecting the system health meter may have its own component health meter.” … “the processing for determining a status value for a component may be triggered in response to different events … the status value determined for the incident meter is a value selected from a set of pre-configured values, such as Warning, Critical, and Normal”, Kuchibhotla [0091-0092]).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Savir, Earhart and Kuchibhotla before him or her, to further modify Savir with the teaching of Kuchibhotla’s system health meter.
One would have been motivated to do so for the benefit of efficiently indicating the status or health of a software system in a simple and summarized manner (Kuchibhotla, Abstract).

Savir, Earhart and Kuchibhotla do not teach determining, by the computer system, a time window starting at a predetermined time before the occurrence of the historical incident and ending at a time after the occurrence of the historical incident. However, Wu teaches determining, by the computer system, a time window starting at a predetermined time before the occurrence of the historical incident and ending at a time after the occurrence of the historical incident (Calculate the average of the remaining time data after screening. Sum the average time of each operation item to calculate the total time from the start of ice melting work to the start of the line. … The time node of each operation item is obtained on the historical operation ticket, and then the operation duration of each item is calculated, and the normal duration data is filtered. According to the process of ice melting, calculate when to dispatch workers, so that the ice watchers will start the ice watch work when they arrive at the ice watch site, to avoid the ice watchers waiting ineffectively in the harsh climate environment, Wu, page 2, lines 38-40, 45-50 and Page 4, lines 14-16).

Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, having the teachings of Savir, Earhart, Kuchibhotla and Wu before him or her, to further modify Savir with the teaching of Wu’s method for determining ice observation and dispatching time of power transmission line. One would have been motivated to do so for the benefit of acquiring the time node data of the operation items in a plurality of historical operation tickets and calculating total and average time (Wu, Abstract).
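The time-window limitation mapped to Wu above amounts to bracketing a historical incident with a window that opens a predetermined time before the incident occurred and closes a set time after it. A minimal sketch under assumed offsets (the 30-minute and 10-minute values, the function name, and the example timestamp are illustrative, not taken from any cited reference):

```python
from datetime import datetime, timedelta

def incident_time_window(incident_time,
                         before=timedelta(minutes=30),
                         after=timedelta(minutes=10)):
    """Return (start, end) bounds of a window that opens a
    predetermined time before the incident and closes after it."""
    return incident_time - before, incident_time + after

# Window around a historical incident logged at 14:00.
start, end = incident_time_window(datetime(2026, 3, 23, 14, 0))
```

Performance data and health data retrieved for the incident would then be filtered to samples whose timestamps fall between `start` and `end`.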
Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HASANUL MOBIN, whose telephone number is (571) 270-1289. The examiner can normally be reached from 9:30 AM to 6:00 PM EST, Monday through Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles Rones, can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /HASANUL MOBIN/ Primary Examiner, Art Unit 2168

Prosecution Timeline

Apr 10, 2023
Application Filed
Nov 26, 2025
Non-Final Rejection — §103
Mar 02, 2026
Response Filed
Mar 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602398
SYNCHRONIZING STATE IN LARGE-SCALE DISTRIBUTION SYSTEMS
2y 5m to grant Granted Apr 14, 2026
Patent 12602390
DATA ANALYSIS SYSTEM AND METHOD
2y 5m to grant Granted Apr 14, 2026
Patent 12591542
DIRECTORY METADATA OPERATION METHOD AND APPARATUS, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 31, 2026
Patent 12585668
EFFICIENT STATE SYNCHRONIZATION IN A CLUSTERED ENVIRONMENT USING COMPACTED KEY/TUPLE REPRESENTATIONS AND SNAPSHOT-BASED STATE RESTORATION
2y 5m to grant Granted Mar 24, 2026
Patent 12572504
DATA ORGANIZER OPTIMIZING RECONCILIATION SYSTEMS
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
75%
Grant Probability
99%
With Interview (+39.0%)
3y 5m
Median Time to Grant
Moderate
PTA Risk
Based on 675 resolved cases by this examiner. Grant probability derived from career allow rate.
