Prosecution Insights
Last updated: April 19, 2026
Application No. 18/526,365

SYSTEMS AND METHODS FOR PREDICTING AND MANAGING OVERCAPACITY IN SYSTEM NETWORK

Status: Non-Final OA (§103)
Filed: Dec 01, 2023
Examiner: HACKENBERG, RACHEL J
Art Unit: 2454
Tech Center: 2400 (Computer Networks)
Assignee: Optum Services (Ireland) Limited
OA Round: 3 (Non-Final)

Grant probability: 79% (Favorable)
Expected OA rounds: 3-4
Expected time to grant: 2y 10m
Grant probability with interview: 99%

Examiner Intelligence

Career allow rate: 79% (236 granted / 300 resolved), above average at +20.7% vs TC avg
Interview lift: strong, +26.4% allowance rate on resolved cases with an interview vs. without
Typical timeline: 2y 10m average prosecution; 35 applications currently pending
Career history: 335 total applications across all art units

Statute-Specific Performance

§101: 4.9% (-35.1% vs TC avg)
§103: 53.2% (+13.2% vs TC avg)
§102: 14.2% (-25.8% vs TC avg)
§112: 17.8% (-22.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 300 resolved cases.
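The headline figures above can be reproduced from the raw counts. A minimal arithmetic sketch (the Tech Center averages are not stated directly, so they are back-computed from the reported deltas, which is an assumption about how the dashboard derives them):

```python
# Reproduce the examiner's headline statistics from the raw counts.
granted, resolved = 236, 300

allow_rate = 100 * granted / resolved  # career allow rate, %
print(round(allow_rate, 1))  # -> 78.7, displayed as 79%

# Each statute-specific rate is reported as a delta vs. the Tech
# Center average; the implied averages can be back-computed.
rates  = {"101": 4.9,   "103": 53.2,  "102": 14.2,  "112": 17.8}
deltas = {"101": -35.1, "103": +13.2, "102": -25.8, "112": -22.2}
tc_avg = {k: round(rates[k] - deltas[k], 1) for k in rates}
print(tc_avg)  # all four deltas imply the same ~40.0% baseline
```

Notably, every statute's delta backs out to the same 40.0% Tech Center baseline, which is consistent with the deltas all being measured against one aggregate estimate.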

Office Action (§103)
DETAILED ACTION

Notice of AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/12/2025 has been entered.

Response to Arguments

Applicant's arguments filed 11/12/2025 have been fully considered. Applicant argues that the prior art of record does not teach the claim amendments, namely that the "preferability of... one or more second systems for a reallocation of one or more users" away from a first system is "based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems". In response, Examiner respectfully agrees. An updated search was conducted, and prior art was discovered that reads on the above amendment: US 2020/0120037 A1 (Zhang).

Balakrishnan teaches most of the limitations of the independent claims, including indicating a preferability of respective one or more second systems for a reallocation ([0055]). However, Balakrishnan is silent on wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems.
Zhang teaches wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems. See Zhang [0393]: after the transport capacity scheduling server receives the request for looking for service requests from the available service provider, the estimated travel time from the location of the available service provider to the candidate region may be determined based on the operation of looking for service requests corresponding to the request for looking for service requests. See also [0407]: if the determined prediction time period is 10:00 a.m. to 11:00 a.m. on M day based on the estimated travel time, the average transport capacity shortage of the candidate region and the average number count of available service providers in the candidate region may be determined based on service request information.

It would have been obvious to modify Balakrishnan per Zhang, as it would allow the modified system to provide timely services, allow an available service provider to quickly arrive at the core location, and improve the efficiency of transport capacity scheduling; see Zhang [0163]. Please see the updated rejections below:

Claim(s) 1, 5-7, 11-13, and 15-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0011521 A1 (Balakrishnan) in view of US 2020/0120037 A1 (Zhang).

Claim(s) 2-4, 14, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0011521 A1 (Balakrishnan) in view of US 2020/0120037 A1 (Zhang), further in view of US 2021/0160142 A1 (Thai).

Claim(s) 8-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0011521 A1 (Balakrishnan) in view of US 2020/0120037 A1 (Zhang), further in view of US 2020/0241921 A1 (Calmon).
Claim Objections

Claim(s) 18-20 is/are objected to because of the following informalities: Claim 18 recites "including systems volumes of users associated with the plurality of systems" in lines 6-7. It should read "including ". Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5-7, 11-13, and 15-18 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0011521 A1 (Balakrishnan) in view of US 2020/0120037 A1 (Zhang).

Regarding Claim 1: Balakrishnan teaches A computer-implemented method comprising:

receiving, by one or more processors, real-time data associated with a plurality of systems, the real-time data including volumes of users associated with the plurality of systems ([0055]-[0065]: receiving, by a patient flow system, patient flow information about a plurality of units (e.g., systems) in a hospital);
generating, by the one or more processors, one or more features associated with a first system of the plurality of systems based on at least a portion of the real-time data ([0066]: "adapts parameters of a machine learning algorithm based on the retrospective hospital data");

generating, by the one or more processors via input of the one or more features into a machine learning model, a prediction that the first system is approaching a user volume capacity threshold ([0066]: feeding the adapted parameters into a modelling pipeline, and subsequently "predicted patient flow is based on output from the adapted machine learning algorithm");

determining, by the one or more processors, one or more probabilities indicating a preferability of respective one or more second systems for a reallocation of one or more users from the first system ([0055]: as part of the modelling pipeline, determining transition probabilities comprising probabilities/trajectories of patients transitioning between wards/units of a hospital; forecasting short-term transition probabilities and predicted transitions over periods of time); and

simulating, by the one or more processors, the reallocation across the plurality of systems based on the one or more probabilities associated with the respective one or more second systems ([0060]-[0064]: wherein the pipeline includes a simulation model, and simulating the capacity prediction of the plurality of wards/units and the transition probabilities within a simulation engine).

Balakrishnan teaches indicating a preferability of respective one or more second systems for a reallocation ([0055]). However, Balakrishnan is silent on wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems.

Zhang teaches, in the same field of endeavor, systems and methods for transport capacity scheduling.
The systems and methods may determine a target region, wherein a plurality of service requests that satisfy a preset condition initiate from the target region (Abstract).

Zhang also teaches wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems ([0393]: after the transport capacity scheduling server receives the request for looking for service requests from the available service provider, the estimated travel time from the location of the available service provider to the candidate region may be determined based on the operation of looking for service requests corresponding to the request for looking for service requests; [0407]: if the determined prediction time period is 10:00 a.m. to 11:00 a.m. on M day based on the estimated travel time, the average transport capacity shortage of the candidate region and the average number count of available service providers in the candidate region may be determined based on service request information).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Balakrishnan per Zhang to include wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems. This would have been advantageous, as discussed above, as it would allow the modified system to provide timely services, allow an available service provider to quickly arrive at the core location, and improve the efficiency of transport capacity scheduling; see Zhang [0163].
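The combined teaching, as the examiner frames it, is a preferability score that blends transition likelihood with travel burden. A hypothetical sketch of one way such a distance-weighted preference could look (the function name, the exponential decay, and the `scale` parameter are illustrative assumptions, not drawn from either reference):

```python
import math

def reallocation_preferences(base_probs, travel_km, scale=10.0):
    """Illustrative scoring: discount each candidate second system's
    transition probability by how far users must travel to it, then
    renormalize so the preferences again sum to 1."""
    scores = {sys: p * math.exp(-travel_km[sys] / scale)
              for sys, p in base_probs.items()}
    total = sum(scores.values())
    return {sys: s / total for sys, s in scores.items()}

# Second systems B and C are equally likely transitions, but B is closer.
prefs = reallocation_preferences(
    base_probs={"B": 0.5, "C": 0.5},
    travel_km={"B": 2.0, "C": 12.0},
)
print(prefs)  # B ends up preferred over C
```

The exponential discount is just one choice; any monotone penalty on distance produces the claimed behavior, i.e. preferability "based at least in part on" travel distance.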
Regarding Claim 13: Balakrishnan teaches A system comprising: one or more processors of a computing system; and one or more non-transitory computer readable media storing instructions that, when executed by the one or more processors ([0041]-[0042]), cause the one or more processors to perform operations comprising:

receiving real-time data associated with a plurality of systems, the real-time data including volumes of users associated with the plurality of systems ([0055]-[0065]: receiving, by a patient flow system, patient flow information about a plurality of units (e.g., systems) in a hospital);

generating one or more features associated with a first system of the plurality of systems based on at least a portion of the real-time data ([0066]: "adapts parameters of a machine learning algorithm based on the retrospective hospital data");

generating, via input of the one or more features into a machine learning model, a prediction that the first system is approaching a user volume capacity threshold ([0066]: feeding the adapted parameters into a modelling pipeline, and subsequently "predicted patient flow is based on output from the adapted machine learning algorithm");

determining one or more probabilities indicating a preferability of respective one or more second systems for a reallocation of one or more users from the first system ([0055]: as part of the modelling pipeline, determining transition probabilities comprising probabilities/trajectories of patients transitioning between wards/units of a hospital; forecasting short-term transition probabilities and predicted transitions over periods of time); and

simulating the reallocation across the plurality of systems based on the one or more probabilities associated with the respective one or more second systems ([0060]-[0064]: wherein the pipeline includes a simulation model, and simulating the capacity prediction of the plurality of wards/units and the transition probabilities within a simulation engine).
Balakrishnan teaches indicating a preferability of respective one or more second systems for a reallocation ([0055]). However, Balakrishnan is silent on wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems.

Zhang teaches wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems ([0393]: after the transport capacity scheduling server receives the request for looking for service requests from the available service provider, the estimated travel time from the location of the available service provider to the candidate region may be determined based on the operation of looking for service requests corresponding to the request for looking for service requests; [0407]: if the determined prediction time period is 10:00 a.m. to 11:00 a.m. on M day based on the estimated travel time, the average transport capacity shortage of the candidate region and the average number count of available service providers in the candidate region may be determined based on service request information).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Balakrishnan per Zhang to include wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems. This would have been advantageous, as discussed above, as it would allow the modified system to provide timely services, allow an available service provider to quickly arrive at the core location, and improve the efficiency of transport capacity scheduling; see Zhang [0163].
Regarding Claim 18: Balakrishnan teaches One or more non-transitory computer readable media storing instructions that, when executed by one or more processors of a computing system ([0041]-[0042]), cause the one or more processors to perform operations comprising:

receiving real-time data associated with a plurality of systems, the real-time data including volumes of users associated with the plurality of systems ([0055]-[0065]: receiving, by a patient flow system, patient flow information about a plurality of units (e.g., systems) in a hospital);

generating one or more features associated with a first system of the plurality of systems based on at least a portion of the real-time data ([0066]: "adapts parameters of a machine learning algorithm based on the retrospective hospital data");

generating, via input of the one or more features into a machine learning model, a prediction that the first system is approaching a user volume capacity threshold ([0066]: feeding the adapted parameters into a modelling pipeline, and subsequently "predicted patient flow is based on output from the adapted machine learning algorithm");

determining one or more probabilities indicating a preferability of respective one or more second systems for a reallocation of one or more users from the first system ([0055]: as part of the modelling pipeline, determining transition probabilities comprising probabilities/trajectories of patients transitioning between wards/units of a hospital; forecasting short-term transition probabilities and predicted transitions over periods of time); and

simulating the reallocation across the plurality of systems based on the one or more probabilities associated with the respective one or more second systems
([0060]-[0064]: wherein the pipeline includes a simulation model, and simulating the capacity prediction of the plurality of wards/units and the transition probabilities within a simulation engine).

Balakrishnan teaches indicating a preferability of respective one or more second systems for a reallocation ([0055]). However, Balakrishnan is silent on wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems.

Zhang teaches wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems ([0393]: after the transport capacity scheduling server receives the request for looking for service requests from the available service provider, the estimated travel time from the location of the available service provider to the candidate region may be determined based on the operation of looking for service requests corresponding to the request for looking for service requests; [0407]: if the determined prediction time period is 10:00 a.m. to 11:00 a.m. on M day based on the estimated travel time, the average transport capacity shortage of the candidate region and the average number count of available service providers in the candidate region may be determined based on service request information).

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Balakrishnan per Zhang to include wherein the preferability of the respective one or more second systems are based at least in part on distances of travel of the one or more users from the first system to the respective one or more second systems.
This would have been advantageous, as discussed above, as it would allow the modified system to provide timely services, allow an available service provider to quickly arrive at the core location, and improve the efficiency of transport capacity scheduling; see Zhang [0163].

Regarding Claims 5, 15: Balakrishnan (as modified by Zhang) teaches the inventions of claims 1 and 13 as described. Balakrishnan teaches wherein simulating the reallocation across the plurality of systems comprises:

grouping, by the one or more processors, the one or more users into one or more subsets (i.e., categories) based on the one or more probabilities ([0049][0055][0066][0033]: the transition probability model takes into account resource availability as parameter input to determine the predicted transition of patients; [0052]: Applicant has further recognized and appreciated that clinical conditions are a major factor influencing a length of stay of a patient; in embodiments, the length of stay model 130 is based on the clinical condition of the patients, and clinical condition information of the patients can be divided into two categories, disease information and physiological data; [0059]: the forecasted transition probabilities can be based on the identified patient type); and

simulating, by the one or more processors, the reallocation of the one or more subsets to the respective one or more second systems ([0060][0066]: the models 130, 132, and 134 simulate the trajectory of patients across the hospital or sub-wards or units at an individual level, based on co-morbidities and historic medical conditions and so on, mimicking their probable length of stay, probable number of arrivals to each ward or unit, etc.; the observable or otherwise measurable traits or characteristics of patients, or phenotyping, helps ensure consistency across arrivals, length of stay, and transition probabilities).
Regarding Claims 6, 16: Balakrishnan (as modified by Zhang) teaches the inventions of claims 1 and 13 as described. Balakrishnan further teaches:

determining, by the one or more processors, that the simulating indicates that all of the plurality of systems are at or under capacity ([0060]: performing demand and capacity predictions for a plurality of wards/units; [0079]: wherein wards may or may not be at capacity); and

initiating, by the one or more processors, one or more actions based on one or more of the prediction or the simulating, the one or more actions including providing a reallocation recommendation based on the one or more probabilities ([0067]-[0073][0076][0033]: recommending a rearrangement of resources/transitions to accommodate the capacity prediction).

Regarding Claims 7, 17: Balakrishnan (as modified by Zhang) teaches the inventions of claims 1 and 13 as described. Balakrishnan further teaches:

determining, by the one or more processors, that the simulating indicates overcapacity at one or more of the plurality of systems ([0005][0033][0069][0075]: separate examples wherein overcapacity may be determined);

generating, by the one or more processors, multiple sets of one or more alternative probabilities indicating a modified preferability of the respective one or more second systems for reallocating the one or more users from the first system ([0068]-[0071], [0074], [0003]-[0007]: each iteration effectively generates multiple recommendations, and recommendations include different arrangements/rearrangements of resources and transitions);
re-simulating, by the one or more processors, the reallocation across the plurality of systems multiple times based on the respective multiple sets of one or more alternative probabilities, the re-simulating associated with a reduced risk of overcapacity to the plurality of systems ([0067]-[0074]: simulating the rearrangement of resources across wards/units of a hospital (e.g., network), wherein the simulation is reiterated);

selecting, by the one or more processors and from the re-simulating, a re-simulation that is associated with a set of one or more alternative probabilities that are most similar to the one or more probabilities ([0073][0074]: providing a graphical display of selectable recommendations, and receiving a selected recommendation, from the user); and

initiating, by the one or more processors, one or more actions based on one or more of the prediction or the selected re-simulation, the one or more actions including providing a reallocation recommendation based on the set of one or more alternative probabilities that corresponds to the selected re-simulation ([0073][0074]: rendering, on display, the selected recommendation and further information about the network and application thereof).

Regarding Claim 11: Balakrishnan (as modified by Zhang) teaches the invention of claim 1 as described. Balakrishnan teaches wherein the machine learning model utilizes exponential smoothing, autoregressive integrated moving average (ARIMA), or long short-term memory (LSTM) neural networks to analyze the one or more features for generating the prediction of overcapacity at any of the plurality of systems ([0055]: the past patient transition data can be used to train and test a forecasting model using, e.g., an exponential smoothing method).

Regarding Claim 12: Balakrishnan (as modified by Zhang) teaches the invention of claim 1 as described.
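The exponential smoothing method cited from Balakrishnan [0055] for claim 11 can be illustrated in a few lines. A minimal sketch of simple exponential smoothing used as a one-step-ahead forecaster (the arrival counts and the `alpha` value are invented for illustration; they are not from the reference):

```python
def exp_smooth_forecast(series, alpha=0.3):
    """Simple exponential smoothing: each new observation updates the
    level as a weighted blend of the observation and the prior level;
    the final level serves as the one-step-ahead forecast."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical hourly arrivals to a unit; forecast the next hour.
arrivals = [12, 15, 14, 18, 20, 19]
print(round(exp_smooth_forecast(arrivals), 2))  # -> 17.08
```

With a small `alpha`, the forecast lags behind recent spikes; larger values track the newest observations more aggressively, which is the trade-off any such forecasting model tunes.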
Balakrishnan teaches wherein the machine learning model is a trained machine learning model that processes historical data associated with the plurality of systems to learn patterns indicative of an overcapacity event ([0049][0055]: feeding, into the model, current and historical data regarding patients/hospitals).

Claim(s) 2-4, 14, and 19-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0011521 A1 (Balakrishnan) in view of US 2020/0120037 A1 (Zhang), further in view of US 2021/0160142 A1 (Thai).

Regarding Claims 2, 19: Balakrishnan (as modified by Zhang) teaches the inventions of claims 1 and 18 as described. Balakrishnan teaches wherein determining the one or more probabilities includes: generating, by the one or more processors, a graph representing the plurality of systems ([0039][0044][0069]: FIG. 5 shows an example schematic graphical representation of a display of a capacity user interface 500. The display can comprise many different configurations and information. For example, the display can comprise the occupancy, capacity, and expected changes in the demand of resources to a hospital. The expected changes can comprise an expected surge in patients, for example. The predicted patient flow, carried out by simulation module 140 or engine 340, can generate a predicted demand or capacity output such as a predicted surge in patients at 5 pm as shown at 505.)

Balakrishnan teaches generating a flow graph ([0069]). However, Balakrishnan is silent on wherein determining the one or more probabilities includes: generating, by the one or more processors, a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems, wherein individual nodes are defined, respectively, by one or more node attributes, and wherein individual edges are defined, respectively, by one or more edge attributes.
Thai teaches, in the same field of endeavor, that topology information including a plurality of snapshots of a network topology associated with respective points in time for a network can be received by an apparatus (Abstract). Thai also teaches wherein determining the one or more probabilities includes: generating, by the one or more processors, a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems, wherein individual nodes are defined, respectively, by one or more node attributes, and wherein individual edges are defined, respectively, by one or more edge attributes.

([0027]: Operation 130 modifies each snapshot to derive modified topology information representing a modified network topology and replaces the set of edges that start from each scale node in a group of scale nodes with a single "aggregated edge," which can be weighted based on an average of the weights of the combined set of edges, or using another suitable technique for combining the edge weights. [0028]: FIG. 2A shows a graph 200A including nodes A-F connected by edges in accordance with the network topology at a first point in time, and FIG. 2B shows a graph 200B including nodes A-F connected by edges in accordance with the network topology at a second point in time. [0087][0088]: historical topological data is used to generalize and abstract nodes (resources) by determining feature representations (feature vectors and node signatures); this makes it possible to determine improved correlation or similarity rules for identifying such patterns and behavior of the network using real-time data records and/or topology information.)
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify Balakrishnan (as modified by Zhang) per Thai to include wherein determining the one or more probabilities includes: generating, by the one or more processors, a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems, wherein individual nodes are defined, respectively, by one or more node attributes, and wherein individual edges are defined, respectively, by one or more edge attributes. This would have been advantageous, as discussed above, as it would allow the combined system to provide a more in-depth visualization, which allows for a more targeted and accurate analysis.

Regarding Claim 3: Balakrishnan (as modified by Zhang and Thai) teaches the invention of claim 2 as described. Balakrishnan teaches wherein determining the one or more probabilities further includes:

processing, by the one or more processors, historical data associated with the plurality of systems to determine respective influences of individual edge attributes on past user reallocations ([0075]: the method and system may receive updated information; the updated information may be historical hospital data, hospital capacity information, and/or patient clinical data; [0076]: with the updated information, the system returns to steps 405, 410, and 420 of the method to update the machine learning algorithm; the system can then generate an updated predicted patient flow, and display the flow and/or any suggested rearrangements of resources);

assigning, by the one or more processors, respective values to individual edge attributes based on the respective determined influences; and calculating, by the one or more processors, an edge weight for individual edges based on the respective assigned values
([0054]: The patient arrival model 132 can be used to adaptively and accurately predict patient arrivals. The patient arrival model 132 can be based on the Poisson arrival probabilities, where historical data is used to compute hourly arrival rates. ... a hyper-parameter can be trained for each day-of-week such that it optimizes the weighted adjustment of the baseline arrival rates. Training of a hyper-parameter, also known as gamma, can ensure that historical and recent trend data is balanced. The gamma can be trained to optimize the weighted adjustments of historical data using the most recent trends, thus achieving a blended model, with higher accuracy in forecasting.)

Regarding Claim 4: Balakrishnan (as modified by Zhang and Thai) teaches the invention of claim 3 as described. Balakrishnan teaches generating a flow graph ([0069]). However, Balakrishnan (as modified by Zhang) is silent on wherein determining the one or more probabilities further includes: normalizing, by the one or more processors, the edge weight for individual edges to output individual probabilities, wherein normalizing comprises dividing the edge weight for individual edges by a sum of edge weights of all the edges.

Thai teaches wherein determining the one or more probabilities further includes: normalizing, by the one or more processors, the edge weight for individual edges to output individual probabilities, wherein normalizing comprises dividing the edge weight for individual edges by a sum of edge weights of all the edges. ([0027]: Operation 130 modifies each snapshot to derive modified topology information representing a modified network topology and replaces the set of edges that start from each scale node in a group of scale nodes with a single "aggregated edge," which can be weighted based on an average of the weights of the combined set of edges, or using another suitable technique for combining the edge weights. [0028]: FIG.
2A shows a graph 200A including nodes A-F connected by edges in accordance with the network topology at a first point in time, and FIG. 2B shows a graph 200B including nodes A-F connected by edges in accordance with the network topology at a second point in time. [0087][0088]: historical topological data is used to generalize and abstract nodes (resources) by determining feature representations (feature vectors and node signatures).)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to further modify Balakrishnan (as modified by Zhang) per Thai to include wherein determining the one or more probabilities further includes: normalizing, by the one or more processors, the edge weight for individual edges to output individual probabilities, wherein normalizing comprises dividing the edge weight for individual edges by a sum of edge weights of all the edges. This would have been advantageous, as discussed above, as it would allow the combined system to provide a more in-depth and targeted analysis.

Regarding Claim 14: Balakrishnan (as modified by Zhang) teaches the invention of claim 13 as described. Balakrishnan teaches wherein determining the one or more probabilities includes: generating a graph representing the plurality of systems ([0039][0044][0069]: FIG. 5 shows an example schematic graphical representation of a display of a capacity user interface 500. The display can comprise many different configurations and information. For example, the display can comprise the occupancy, capacity, and expected changes in the demand of resources to a hospital. The expected changes can comprise an expected surge in patients, for example. The predicted patient flow, carried out by simulation module 140 or engine 340, can generate a predicted demand or capacity output such as a predicted surge in patients at 5 pm as shown at 505.)
processing historical data associated with the plurality of systems to determine respective influences of individual edge attributes that defines the edges on past user reallocations; ([0075] The method and system may receive updated information. The updated information may be historical hospital data, hospital capacity information, and/or patient clinical data. [0076] With the updated information, the system returns to steps 405, 410, and 420 of the method to update the machine learning algorithm. The system can then generate an updated predicted patient flow, and display the flow and/or any suggested rearrangements of resources.)

assigning values to individual edge attributes based on the respective determined influences; calculating an edge weight for individual edges based on the respective assigned values; ([0054] The patient arrival model 132 can be used to adaptively and accurately predict patient arrivals. The patient arrival model 132 can be based on the Poisson arrival probabilities, where historical data is used to compute hourly arrival rates. … a hyper-parameter can be trained for each day-of-week such that it optimizes the weighted adjustment of the baseline arrival rates. Training of a hyper-parameter, also known as gamma, can ensure that historical and recent trend data is balanced. The gamma can be trained to optimize the weighted adjustments of historical data using the most recent trends, thus achieving a blended model, with higher accuracy in forecasting.)

Balakrishnan teaches on generating a flow graph ([0069]). However, Balakrishnan (as modified by Zhang) is silent on wherein determining the one or more probabilities includes: generating a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems and normalizing the edge weight for individual edges to output the one or more probabilities.
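The normalization step recited in the claims (dividing each edge weight by the sum of all edge weights, so that the weights form a probability distribution over the edges) is mechanically simple. A minimal sketch, with names chosen for illustration rather than taken from either reference:

```python
def normalize_edge_weights(edge_weights):
    """Map each edge's weight to a probability by dividing it by the
    sum of all edge weights, so the resulting values sum to 1."""
    total = sum(edge_weights.values())
    if total == 0:
        raise ValueError("cannot normalize: edge weights sum to zero")
    return {edge: weight / total for edge, weight in edge_weights.items()}
```

For example, edge weights {("A", "B"): 2, ("A", "C"): 6} normalize to probabilities 0.25 and 0.75 respectively.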
Thai teaches wherein determining the one or more probabilities includes: generating a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems and normalizing the edge weight for individual edges to output the one or more probabilities. ([0027] Operation 130 modifies each snapshot to derive modified topology information representing a modified network topology and replaces the set of edges that start from each scale node in a group of scale nodes with a single "aggregated edge," which can be weighted based on an average of the weights of the combined set of edges, or using another suitable technique for combining the edge weights. [0028] FIG. 2A shows a graph 200A including nodes A-F connected by edges in accordance with the network topology at a first point in time, and FIG. 2B shows a graph 200B including nodes A-F connected by edges in accordance with the network topology at a second point in time. [0087][0088] Historical topological data to generalize and abstract nodes (resources) by determining feature representations (feature vectors and node signatures). This makes it possible to determine improved correlation or similarity rules for identifying such patterns and behavior of the network using real-time data records and/or topology information.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Balakrishnan (as modified by Zhang) per Thai to include wherein determining the one or more probabilities includes: generating a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems and normalizing the edge weight for individual edges to output the one or more probabilities.
This would have been advantageous as discussed above, as it would allow the combined system to provide a more in-depth and targeted analysis.

Regarding Claim 20: Balakrishnan (as modified by Zhang & Thai) teaches the invention of claim 19 as described. Balakrishnan teaches wherein determining the one or more probabilities associated with the respective one or more second systems further includes: processing historical data associated with the plurality of systems to determine respective influences of individual edge attributes on past resource user reallocations; ([0075] The method and system may receive updated information. The updated information may be historical hospital data, hospital capacity information, and/or patient clinical data. [0076] With the updated information, the system returns to steps 405, 410, and 420 of the method to update the machine learning algorithm. The system can then generate an updated predicted patient flow, and display the flow and/or any suggested rearrangements of resources.)

assigning values to individual edge attributes based on the respective determined influences; calculating an edge weight for individual edges based on the respective assigned values; ([0054] The patient arrival model 132 can be used to adaptively and accurately predict patient arrivals. The patient arrival model 132 can be based on the Poisson arrival probabilities, where historical data is used to compute hourly arrival rates. … a hyper-parameter can be trained for each day-of-week such that it optimizes the weighted adjustment of the baseline arrival rates. Training of a hyper-parameter, also known as gamma, can ensure that historical and recent trend data is balanced. The gamma can be trained to optimize the weighted adjustments of historical data using the most recent trends, thus achieving a blended model, with higher accuracy in forecasting.)

Balakrishnan teaches on generating a flow graph ([0069]).
However, Balakrishnan (as modified by Zhang) is silent on normalizing the edge weight for individual edges to output the one or more probabilities.

Thai teaches normalizing the edge weight for individual edges to output the one or more probabilities. ([0027] Operation 130 modifies each snapshot to derive modified topology information representing a modified network topology and replaces the set of edges that start from each scale node in a group of scale nodes with a single "aggregated edge," which can be weighted based on an average of the weights of the combined set of edges, or using another suitable technique for combining the edge weights. [0028] FIG. 2A shows a graph 200A including nodes A-F connected by edges in accordance with the network topology at a first point in time, and FIG. 2B shows a graph 200B including nodes A-F connected by edges in accordance with the network topology at a second point in time. [0087][0088] Historical topological data to generalize and abstract nodes (resources) by determining feature representations (feature vectors and node signatures).)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Balakrishnan (as modified by Zhang) per Thai to include wherein determining the one or more probabilities includes: generating a graph representing the plurality of systems as a plurality of nodes and connections between the plurality of systems as edges based on the real-time data associated with the plurality of systems and normalizing the edge weight for individual edges to output the one or more probabilities. This would have been advantageous as discussed above, as it would allow the combined system to provide a more in-depth and targeted analysis.

Claim(s) 8-10 is/are rejected under 35 U.S.C. 103 as being unpatentable over US 2023/0011521 A1 (Balakrishnan) in view of US 2020/0120037 A1 (Zhang) further in view of US 2020/0241921 A1 (Calmon).

Regarding Claim 8: Balakrishnan (as modified by Zhang) teaches the invention of claim 1 as described. Balakrishnan teaches on wherein the simulating of the reallocation across the plurality of systems is iterated ([0060]-[0074]).

However, Balakrishnan (as modified by Zhang) is silent on wherein the simulating of the reallocation across the plurality of systems is iterated for a predetermined duration or until an uncertainty threshold is reached.

Calmon teaches, in the same field of endeavor, configuration of reinforcement learning agents for resource allocation for iterative workloads, such as training Deep Neural Networks (Abstract). Calmon also teaches wherein the simulating of the reallocation across the plurality of systems is iterated for a predetermined duration or until an uncertainty threshold is reached. ([0033][0004] wherein a simulation includes a duration criterion specifying the duration for the simulation to run.)

It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify Balakrishnan (as modified by Zhang) per Calmon to include wherein the simulating of the reallocation across the plurality of systems is iterated for a predetermined duration or until an uncertainty threshold is reached. This would have been advantageous as discussed above, as it would allow the combined system to provide scheduled simulations with optimal timing and without requiring user input for the duration.

Regarding Claim 9: Balakrishnan (as modified by Zhang & Calmon) teaches the invention of claim 8 as described.
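The stopping condition recited in claims 8-9 (iterate the simulation for a predetermined duration, or pause early once an accumulated uncertainty value meets the threshold) can be sketched as below. This is a hypothetical illustration of the claim language; the step function, its return convention, and all names are assumptions, not taken from any of the cited references:

```python
def simulate_until_uncertain(step, max_iterations, uncertainty_threshold):
    """Run iterated reallocation simulations, accumulating an uncertainty
    value that grows with each iteration; pause once it meets or exceeds
    the threshold, otherwise stop after the predetermined duration."""
    uncertainty = 0.0
    state = None
    for i in range(max_iterations):
        state, delta = step(i)       # one simulated reallocation round
        uncertainty += abs(delta)    # uncertainty increases per iteration
        if uncertainty >= uncertainty_threshold:
            break                    # pause the simulating (claim 9)
    return state, uncertainty
```

With a step that adds 0.3 uncertainty per round and a threshold of 1.0, the loop pauses after the fourth iteration rather than running the full predetermined duration.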
Note: the uncertainty threshold is an optional claim feature in Claim 8 that is unselected in the instant embodiment of the prior art; the following features therefore recite unselected optional subject matter: wherein iterating the simulating until the uncertainty threshold is reached comprises: calculating, by the one or more processors, an uncertainty value for individual iterated simulations, wherein the uncertainty value increases over individual iterated simulations; and pausing, by the one or more processors, the simulating upon determining the uncertainty value is equal to or above the uncertainty threshold.

Regarding Claim 10: Balakrishnan (as modified by Zhang & Calmon) teaches the invention of claim 9 as described. Note: the uncertainty value is an optional claim feature in Claim 9 and depends on an optional feature in Claim 8 that is unselected in the instant embodiment of the prior art; the following features therefore recite unselected optional subject matter: wherein the uncertainty value indicates an average variance in the reallocation of the one or more users to the respective one or more second systems.

Conclusion & Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RACHEL J HACKENBERG, whose telephone number is (571)272-5417. The examiner can normally be reached 9am-5pm M-F. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Glenton B Burgess, can be reached at (571)272-3949. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RACHEL J HACKENBERG/
Primary Examiner, Art Unit 2454

Prosecution Timeline

Dec 01, 2023: Application Filed
May 10, 2025: Non-Final Rejection — §103
Jun 17, 2025: Applicant Interview (Telephonic)
Jun 30, 2025: Examiner Interview Summary
Aug 15, 2025: Response Filed
Sep 14, 2025: Final Rejection — §103
Oct 28, 2025: Applicant Interview (Telephonic)
Oct 28, 2025: Examiner Interview Summary
Nov 12, 2025: Request for Continued Examination
Nov 22, 2025: Response after Non-Final Action
Jan 23, 2026: Non-Final Rejection — §103
Mar 27, 2026: Examiner Interview Summary
Mar 27, 2026: Applicant Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12587464: FAULT INJECTION CONFIGURATION EQUIVALENCY TESTING (granted Mar 24, 2026; 2y 5m to grant)
Patent 12580819: DETERMINING SERVICE GROUP CAPACITY BASED ON AN AGGREGATE RISK METRIC (granted Mar 17, 2026; 2y 5m to grant)
Patent 12500823: SYSTEM AND METHOD FOR ENTERPRISE-WIDE DATA UTILIZATION TRACKING AND RISK REPORTING (granted Dec 16, 2025; 2y 5m to grant)
Patent 12495001: CAPACITY AWARE LOAD PACKING FOR LAYER-4 LOAD BALANCER (granted Dec 09, 2025; 2y 5m to grant)
Patent 12470508: RESTRICTING MESSAGE NOTIFICATIONS AND CONVERSATIONS BASED ON DEVICE TYPE, MESSAGE CATEGORY, AND TIME PERIOD (granted Nov 11, 2025; 2y 5m to grant)
Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 79%
With Interview: 99% (+26.4%)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 300 resolved cases by this examiner. Grant probability derived from career allow rate.
