Prosecution Insights
Last updated: April 19, 2026
Application No. 18/121,066

SYSTEM AND METHOD FOR PROVIDING ADAPTIVE PRESCRIPTIVE ANALYTICS FOR RECOMMENDATION ENGINES USING MULTI-LAYER CORRELATION AND DATA ANALYTICS

Status: Non-Final OA (§103)
Filed: Mar 14, 2023
Examiner: BIAGINI, CHRISTOPHER D
Art Unit: 2445
Tech Center: 2400 (Computer Networks)
Assignee: Wise Creation International Limited
OA Round: 3 (Non-Final)
Grant Probability: 58% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 5m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 58% (281 granted / 486 resolved; at TC average)
Interview Lift: +33.3% (strong lift across resolved cases with interview)
Avg Prosecution: 4y 5m typical timeline; 13 applications currently pending
Total Applications: 499 across all art units
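The headline numbers above fit together; a quick check, under the assumption (not stated on the page) that the interview lift is additive in percentage points on top of the career allow rate:

```python
# Examiner's career record, as shown on the dashboard
granted, resolved = 281, 486
base_pct = 100 * granted / resolved        # ≈ 57.8%, displayed as 58%

# "+33.3%" interview lift, read as percentage points (assumption)
lift_pp = 33.3
with_interview_pct = round(base_pct) + lift_pp   # 58 + 33.3 = 91.3

print(round(base_pct))            # 58
print(round(with_interview_pct))  # 91, matching the "With Interview" figure
```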

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 44.7% (+4.7% vs TC avg)
§102: 10.8% (-29.2% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 486 resolved cases
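The per-statute deltas are internally consistent: assuming each delta is the examiner's allow rate minus the Tech Center average, every statute implies the same 40.0% baseline:

```python
# (examiner allow rate %, delta vs TC average %) per statute, from the panel above
stats = {
    "101": (15.6, -24.4),
    "103": (44.7, +4.7),
    "102": (10.8, -29.2),
    "112": (19.5, -20.5),
}

# Implied TC average = examiner rate - delta
for statute, (rate, delta) in stats.items():
    print(statute, round(rate - delta, 1))  # each line ends in 40.0
```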

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant’s arguments with respect to the rejections under 35 USC 101 have been fully considered and are persuasive in light of the amendments. Accordingly, the rejections are withdrawn. Applicant’s arguments with respect to the rejections under 35 USC 103 have been fully considered and are persuasive in light of the amendments. Accordingly, the rejections are withdrawn. However, upon further consideration, new grounds of rejection are made.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-8 are rejected under 35 U.S.C. 103 as being unpatentable over Neate (US Pub. No. 2023/0250717) in view of Duggan (US Pub. No. 2017/0178027).

Regarding claim 1, Neate shows a system for providing adaptive prescriptive analytics using multi-layer correlation and data analytics, comprising: a recommendation engine implemented as a plurality of microservices (the predictive modeling and scaling system implemented via data analysis components such as containers, services, and models: see Figs. 1-2, [0024]-[0030], [0035], [0051]-[0052] and [0055]-[0056]) including at least a real time analytics engine (e.g., the software which considers current demand on a service: see [0025]), a batch analytics engine (e.g., the software which considers historical trends: see [0027]), and a forecast analytics engine (e.g., the software which forecasts future demand: see [0037], [0048], and [0051]); a telemetry interface (at least implicitly disclosed as the necessary interface which receives gathered data: see [0024] and [0037]) configured to collect application-layer performance metrics (e.g., current demand on resources: see [0025] and [0037]), service-mesh traffic metrics (e.g., demand for other service instances and adjacent related services: see [0026] and [0037]), and infrastructure-layer resource utilization metrics (e.g., network usage information: see [0037], [0041]-[0043]); a multi-layer correlation controller, connected with said recommendation engine, configured to execute correlation models that map the collected metrics to a workload bottleneck (e.g., machine learning models that are used to predict an overload: see [0032]-[0033], [0035]-[0036], [0045], [0051]-[0052]); a memory, connected with said recommendation engine and said multi-layer correlation controller, configured to store data and instructions for making recommendations for optimization based on the analyzed data (at least implicitly disclosed as the necessary memory which stores the computer instructions to implement proactive action to prevent the overload, such as allocating resources, “spinning up” containers, etc.: see [0010] and [0033]); and an adaptive scaling controller, connected with said recommendation engine, configured to automatically issue scaling commands to a container orchestration platform to adjust a replica count or compute resources (at least implicitly disclosed as the necessary commands which cause resources to be allocated, such as VMs or containers: see [0010], [0033], [0069], [0073]).

Neate does not explicitly show: that the workload bottleneck is associated with one of the microservices; that the scaling commands are to adjust the compute resources of only the associated microservice while maintaining other microservices unchanged.

Duggan shows: a workload bottleneck associated with one of a plurality of microservices (e.g., determining a resource load and execution frequency for one of a plurality of models which accept input and produce output: see [0017], [0026], [0056]-[0059]); and scaling commands that are to adjust compute resources of an associated analytics microservice while maintaining other microservices unchanged (e.g., at least implicitly disclosed as the necessary commands that adjust execution frequency for a model, thereby adjusting resources up or down, or adjusting available resources for the model: see [0056]-[0059]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Neate with the teachings of Duggan in order to allow the analytics capability to scale with demand, thereby allowing it to be performant even when demand increases.

Regarding claim 2, the combination shows the limitations of claim 1 as applied above and further shows wherein said multi-layer correlation controller uses multi-layer correlation and data analytics to analyze data and correlations between different layers of the system, and to identify which parts of said recommendation engine needed to be scaled up in response to increased data volume (see Duggan, [0030] and [0054], as combined above; and Neate, [0024]-[0028], [0037], [0041]-[0043], [0048]-[0049], [0051]-[0054]).

Regarding claim 3, the combination shows the limitations of claim 1 as applied above and further shows wherein the data from various sources to be analyzed by said multi-layer correlation controller include application data, server data, and cloud instance data (see Duggan, [0030] and [0054], as combined above; and Neate, [0024]-[0028], [0037], [0041]-[0043], [0048]-[0049], [0051]-[0054]).

Regarding claim 4, the combination shows the limitations of claim 1 as applied above and further shows a user interface, connected with said recommendation engine, for providing recommendations to a user for optimization implementation (see Duggan, [0052] and [0059]). Note that it would have been obvious to further modify Neate with these additional teachings of Duggan in order to give administrators visibility into the changes being proposed or implemented.

Regarding claim 5, Neate shows a computer implemented method for providing adaptive prescriptive analytics using multi-layer correlation and data analytics, comprising the steps of: a) collecting runtime telemetry (see [0024] and [0037]) including application KPIs (e.g., current demand on resources: see [0025] and [0037]), service-mesh metrics (e.g., demand for other service instances and adjacent related services: see [0026] and [0037]), and infrastructure resource metrics (e.g., network usage information: see [0037], [0041], [0042]-[0043]); b) executing multi-layer correlation models to identify a workload bottleneck (e.g., machine learning models that are used to predict an overload: see [0032]-[0033], [0035]-[0036], [0045], [0051]-[0052]); c) automatically generating scaling instructions (at least implicitly disclosed as the necessary commands which cause resources to be allocated, such as VMs or containers: see [0010], [0033], [0069], [0073]); and d) applying the scaling instructions via a container orchestration system to modify compute resources (allocating resources, “spinning up” containers, etc.: see [0010] and [0033]).

Neate does not explicitly show: that the workload bottleneck corresponds to a specific analytics engine; that the scaling instructions are targeted to the identified analytics engine; and that the compute resources modified are of the identified specific analytics engine without scaling other engines.

Duggan shows: a workload bottleneck corresponding to a specific analytics engine (e.g., determining a resource load and execution frequency for one of a plurality of models which accept input and produce output: see [0017], [0026], [0056]-[0059]); scaling instructions targeted to an identified analytics engine (e.g., at least implicitly disclosed as the necessary commands that adjust execution frequency for a model, thereby adjusting resources up or down, or adjusting available resources for the model: see [0056]-[0059]); and that compute resources modified are of the identified specific analytics engine without scaling other engines (adjusting execution frequency for a model, thereby adjusting resources up or down, or adjusting available resources for the model: see [0056]-[0059]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Neate with the teachings of Duggan in order to allow the analytics capability to scale with demand, thereby allowing it to be performant even when demand increases.

Regarding claim 6, the combination shows the limitations of claim 5 as applied above and further shows wherein step b) comprises sub-steps of: b1) obtaining resource prediction data at a future time point based on workload data and resource utilization data through system resource demand prediction and resource management model established by the multi-layer correlation; and b2) identifying the bottlenecks and inefficiencies in the system based on the resource prediction data, workload data, resource utilization data and performance data through data dynamics by prescriptive analytics (see Duggan, [0030] and [0054], as combined above; and Neate, [0024]-[0028], [0037], [0041]-[0043], [0048]-[0049], [0051]-[0054]).

Regarding claim 7, the combination shows the limitations of claim 5 as applied above and further shows a step of: providing a user with recommendations for implementing optimization (see Duggan, [0052] and [0059]). Note that it would have been obvious to further modify Neate with these additional teachings of Duggan in order to give administrators visibility into the changes being proposed or implemented.

Regarding claim 8, the combination shows the limitations of claim 5 as applied above and further shows a step of: overriding or modifying recommendations as needed by means of the user interface (see Duggan, [0052], [0059], [0088], Fig. 8). Note that it would have been obvious to further modify Neate with these additional teachings of Duggan in order to give administrators control over functionality of the system.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher D. Biagini whose telephone number is (571) 272-9743. The examiner can normally be reached weekdays from 9 AM - 5 PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Oscar Louie, can be reached at (571) 270-1684. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

Christopher D. Biagini
Primary Examiner
Art Unit 2445

/Christopher Biagini/
Primary Examiner, Art Unit 2445
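The claimed method (claim 5, steps a through d) boils down to a targeted-scaling loop: collect per-service telemetry, correlate it to a bottleneck, generate a scaling instruction for that service only, and apply it. A minimal sketch of that loop, with all service names, metrics, and thresholds invented for illustration:

```python
# Hypothetical sketch of the claim 5 steps (all names and numbers invented):
# a) collect telemetry, b) correlate to find the bottleneck engine,
# c) generate a scaling instruction, d) apply it to only that engine.

def find_bottleneck(telemetry):
    # b) toy "correlation": pick the engine with the highest CPU utilization
    return max(telemetry, key=lambda svc: telemetry[svc]["cpu"])

def scale_instruction(svc, telemetry, threshold=0.8):
    # c) add a replica only when the bottleneck engine exceeds the threshold
    current = telemetry[svc]["replicas"]
    target = current + 1 if telemetry[svc]["cpu"] > threshold else current
    return {"service": svc, "replicas": target}

# a) telemetry snapshot for the three analytics engines named in claim 1
telemetry = {
    "realtime-engine": {"cpu": 0.92, "replicas": 2},
    "batch-engine":    {"cpu": 0.40, "replicas": 2},
    "forecast-engine": {"cpu": 0.35, "replicas": 2},
}
svc = find_bottleneck(telemetry)
cmd = scale_instruction(svc, telemetry)
# d) only the identified engine changes; the others keep their replica counts
print(cmd)  # {'service': 'realtime-engine', 'replicas': 3}
```

In a real deployment, step d) would hand `cmd` to a container orchestrator (e.g. a Kubernetes scale call against one Deployment), which is exactly the "without scaling other engines" distinction the rejection maps to Duggan.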

Prosecution Timeline

Mar 14, 2023
Application Filed
Jun 06, 2025
Non-Final Rejection — §103
Sep 08, 2025
Response Filed
Oct 16, 2025
Final Rejection — §103
Jan 16, 2026
Request for Continued Examination
Jan 26, 2026
Response after Non-Final Action
Feb 18, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603855
Apparatus, System and Methods For Managing Private Content Delivery In Association With a Shipment
2y 5m to grant · Granted Apr 14, 2026
Patent 12574307
Computing Cluster for Providing Virtual Markers Based Upon Network Connectivity
2y 5m to grant · Granted Mar 10, 2026
Patent 12568511
USER EQUIPMENTS, BASE STATIONS, AND METHODS
2y 5m to grant · Granted Mar 03, 2026
Patent 12561695
COMMUNICATION NETWORK AND METHOD FOR ROUTING DATA MESSAGES ON NETWORKS HAVING DIFFERENT COMMUNICATION PROTOCOLS
2y 5m to grant · Granted Feb 24, 2026
Patent 12562974
SYNTHETIC INFRASTRUCTURE TOPOLOGIES FOR GRAPH WORKLOAD PLACEMENT SIMULATIONS
2y 5m to grant · Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 91% (+33.3%)
Median Time to Grant: 4y 5m
PTA Risk: High
Based on 486 resolved cases by this examiner. Grant probability derived from career allow rate.
