Prosecution Insights
Last updated: April 17, 2026
Application No. 18/545,346

SYSTEM AND METHOD FOR AI-BASED GRAPH MANAGEMENT

Status: Final Rejection (§103)
Filed: Dec 19, 2023
Examiner: SKHOUN, HICHAM
Art Unit: 2164
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 4 (Final)
Grant Probability: 77% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 3y 1m
Grant Probability with Interview: 83%

Examiner Intelligence

Career Allow Rate: 77% (above average; 266 granted / 344 resolved; +22.3% vs TC avg)
Interview Lift: +5.6% on resolved cases with interview (moderate, roughly +6%)
Typical Timeline: 3y 1m average prosecution; 25 applications currently pending
Career History: 369 total applications across all art units

Statute-Specific Performance

§101: 13.6% (-26.4% vs TC avg)
§103: 41.0% (+1.0% vs TC avg)
§102: 27.2% (-12.8% vs TC avg)
§112: 8.1% (-31.9% vs TC avg)

Tech Center averages are estimates; based on career data from 344 resolved cases.

Office Action

§103
Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

2. Claims 1-8 and 10-20 were presented for examination and are pending. Claims 1-8 and 10-20 have been rejected. No new matter is added.

3. This office action is in response to the REM filed 02/27/2026.

4. The office action is made Final.

Examiner Note

5. The Examiner cites particular columns and line numbers in the references as applied to the claims below for the convenience of the Applicant(s). Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the Applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the Examiner.

Claim Rejections - 35 USC § 103

6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102 of this title, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negatived by the manner in which the invention was made.

8. Claims 1, 3-8, 12-16 and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Sinha et al. (US 20210133612 A1), hereinafter Sinha, in view of Parameswaran et al. (US 20200184376 A1), hereinafter Parameswaran.

9. Regarding claim 1, Sinha teaches a system for an automated real-time management of a directed acyclic graph (DAG) based on predictive analytics of DAG input-related data (Fig 1, Fig 10, a network environment for generating graph data structures for using inter-feature dependencies in machine-learning models), comprising:

a processor of a graph compute manager (GCM) node configured to host a machine learning (ML) module and connected to at least one DAG source entity node over a network (Fig 1 and Fig 10, computing device 104 (a graph compute manager (GCM) node) hosting machine learning models 128 (ML module) and connected to input dataset 144 (DAG source entity node) over a network 108; [0021], “The graph data structure represents the input features as nodes.”); and

a memory on which are stored machine-readable instructions that when executed by the processor (Fig 1, memory 112), cause the processor to:

acquire the DAG input-related data from the at least one DAG source entity node ([0019], “automated modeling systems described herein are used to generate a graph data structure, such as a directed acyclic graph, that models dependencies of features configured to be input into a machine-learning model.”; [0021], “The graph data structure could be, for example, a directed acyclic graph that is generated from an analysis of the input features. The graph data structure represents the input features as nodes.”; Fig 1, [0034], “receive marketing data (the DAG input-related data) and parse the marketing data to define input features for a predictive model.”, “Program instructions 116 parse the data marketing data to determine input features and their values for the predictive model.”; [0042], “The graph data structure represents each input feature as a node. Edges connect pairs of nodes together. Edges are directed indicating one node of the pair (e.g., the destination node) is dependent on the other node (e.g., the source node).”; [0054]; [0100], “The computing system 1000 can access input datasets 144 and the graph data structure 120 in any suitable manner.”);

parse the DAG input-related data to derive a plurality of key features (Fig 1, [0034], “receive marketing data and parse the marketing data to define input features for a predictive model.”, “Program instructions 116 parse the data marketing data to determine input features and their values for the predictive model.”);

query a local DAGs' database to retrieve local historical DAGs'-related data associated with previous DAG parameters based on the plurality of key features ([0026], “the input dataset (and/or a historical dataset) provides various values of the input features including the webpage visits and social media posts that are observed (e.g., either contemporaneously or historically).”; [0035], “The weight is defined using current and historical marketing data in which correlations between input features can be observed. In some instances, the graph data structure is generated using structured learning with continuous optimization.”; [0039], “Simulations 140 (local database) is a database that stores historical simulations performed by processing device 104… The simulations stored in simulations 140 can be used as baselines for scenario executed by processing device 104.”);

generate at least one feature vector based on the plurality of key features and the local historical DAGs'-related data ([0055], “the processing device can use the input dataset and historical dataset to observe correlations between the values of pairs of input features (feature vector).”);

provide the at least one feature vector to the ML module for generating a predictive model configured to produce at least one DAG update parameter for updating the DAG at the at least one DAG source entity, wherein the predictive model is dynamically trained based on iterative feature inputs and historical DAG performance metric ([0019], “The automated modeling systems build predictive models by varying one or more input features and using the graph data structure to define the values for the remaining input features. The automated modeling systems then apply a trained machine-learning model to the input features to produce an accurate predictive output.”; [0024], “Thus, after the automated modeling systems generates an updated set of input features using feature dependencies modeled by the graph data structure, the automated modeling system applies the predictive model to the updated set of input features.”; [0036]; [0049], “The user interface engine detects one or more modifications to input features (i.e., a simulation parameter) using a simulation event listener… The simulator 220 can thereby compute any corresponding changes to one or more other input features for use by a simulation.”; [0055], “The processing devices then iteratively adds, removes, or reverses an edge between two input features. With each iteration, the processing device computes a score of the resulting directed graph. For instance, the score of the directed graph increases when an edge connecting a source node to a destination is added, while the score of the directed graph decreases when an edge connecting two uncorrelated nodes is added”; [0057]; [0068], “the processing device may rebuild the graph data structure using historical data (e.g., the same data used to originally construct graph data structure) or using contemporaneously acquired data.”; [0084], “The processing device iteratively modifies the graph by adding, removing, or reversing a directed edge and then determining a score of the resulting modification. If the score increases, the modification is retained. If the score decreases, the modification is discarded. The processing device continuously optimizes the directed graph until the score exceeds a threshold value or the score no longer increases between iterations.”; [0088], “the processing device rebuilds the directed graph in response to a low degree of correlation between two input features.”);

wherein the at least one DAG update parameter modifies structure and execution flow of the DAG itself at the at least one DAG source entity by at least one of adding, removing, or reordering DAG nodes and edges based on monitored execution states ([0055], “The processing devices then iteratively adds, removes, or reverses an edge between two input features. With each iteration, the processing device computes a score of the resulting directed graph… If the score increases, the addition, removal, or reversal of the edge during that iteration is maintained. If the score decreases, the addition, removal, or reversal of the edge during that iteration is omitted. The processing device continues to iteratively add, remove, or reverse edges until a particular score threshold is reached or until the score can no longer be increased.”; [0057]; [0068]; [0084]; [0088], as quoted above);

and Sinha implicitly teaches wherein the predictive model identifies and re-executes only a minimal subset of the DAG rather than re-executing the entire DAG ([0006], “a directed graph that includes nodes that represent the input features and edges that link nodes”; [0019], “The automated modeling systems build predictive models by varying one or more input features and using the graph data structure to define the values for the remaining input features.”; [0056], “continuously optimize the directed graph”; [0084], “The directed graph may be a directed acyclic graph. The processing device generates the directed graph using continuous optimization in which a graph of nodes representing input features is initialized. The processing device iteratively modifies the graph by adding, removing, or reversing a directed edge and then determining a score of the resulting modification… The processing device continuously optimizes the directed graph until the score exceeds a threshold value or the score no longer increases between iterations.”);

and apply the at least one DAG update parameter to optimize real-time operation of the at least one DAG source entity node ([0035], “the graph data structure is generated using structured learning with continuous optimization.”; [0056], “A smooth directed acyclic graph constraint can be applied during building of the directed graph to smooth and continuously optimize the directed graph. The directed graph can then be generated using gradient descent-based approaches.”; [0060], “the baseline outcome corresponds to the real-world outcome since the input features correspond to observable input features.”; [0084], as quoted above).
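The Sinha passages quoted above describe an iterative optimization: add, remove, or reverse an edge, score the resulting graph, and keep the modification only if the score improves, stopping once no change helps. A minimal sketch of that loop follows, assuming a correlation-based score and illustrative feature names; none of this code comes from the reference, and Sinha's actual score (including its smooth-DAG-constraint variant) is not specified at this level.

```python
from itertools import permutations

def is_acyclic(nodes, edges):
    """DFS cycle check so every kept modification leaves a valid DAG."""
    adj = {n: [] for n in nodes}
    for src, dst in edges:
        adj[src].append(dst)
    state = dict.fromkeys(nodes, 0)  # 0 = unvisited, 1 = on stack, 2 = done

    def dfs(n):
        state[n] = 1
        for m in adj[n]:
            if state[m] == 1 or (state[m] == 0 and not dfs(m)):
                return False  # back edge found: cycle
        state[n] = 2
        return True

    return all(state[n] != 0 or dfs(n) for n in nodes)

def optimize_dag(nodes, corr, threshold=0.5):
    """Greedily add directed edges between correlated features, keeping a
    modification only if it raises the score and preserves acyclicity,
    and stopping when no modification improves the score."""
    edges, score = set(), 0.0
    improved = True
    while improved:
        improved = False
        for a, b in permutations(nodes, 2):
            if (a, b) in edges or (b, a) in edges:
                continue
            # Correlation treated as symmetric; edges between weakly
            # correlated features would lower the score, so skip them.
            gain = corr.get((a, b), corr.get((b, a), 0.0)) - threshold
            if gain > 0 and is_acyclic(nodes, edges | {(a, b)}):
                edges.add((a, b))
                score += gain
                improved = True
    return edges, score
```

With three toy features where only visits-purchases and posts-purchases are strongly correlated, the loop keeps exactly those two edges and leaves the weak visits-posts pair unconnected.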
However, Parameswaran explicitly teaches wherein the predictive model identifies and re-executes only a minimal subset of the DAG rather than re-executing the entire DAG and apply the at least one DAG update parameter to optimize real-time operation of the at least one DAG source entity node ([0036], “determine an “optimal” set of tasks whose outputs should be stored for later re-use, so as to minimize future compute time and the costs of storing/loading the selected outputs.”; [0062], “We propose HELIX, a machine learning system that optimizes the execution across iterations—intelligently caching and reusing, or recomputing intermediates as appropriate.”; [0090]; [0095-0100], “The DAG optimizer creates a physical plan GOPTWt to be executed by pruning and ordering the nodes in GWt and deciding whether any computation can be replaced with loading previous results from disk… [0100] During execution, the materialization optimizer determines which nodes in GOPTWt should be persisted to disk for future use.”; [0201], “Pruning optimizations are provided to eliminate redundant computations”; Fig 1D, steps 1130 to 150; [0243-0247]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Parameswaran into Sinha's system, because both systems relate to machine learning workflows and the combination would provide a holistic optimization for accelerating iterative machine learning (Parameswaran, [0002]).
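The selective re-execution idea the rejection draws from Parameswaran (HELIX-style caching and pruning) reduces, at its core, to computing the downstream closure of the changed nodes and re-running only that subset while reusing cached outputs for everything else. A minimal sketch under assumed names; this is not code from either reference:

```python
def minimal_rerun_set(edges, changed):
    """Return the nodes that must be re-executed after an update: the
    changed nodes plus everything reachable from them in the DAG.
    Every other node's cached output can be reused instead of
    re-executing the entire DAG."""
    children = {}
    for src, dst in edges:
        children.setdefault(src, []).append(dst)
    dirty, stack = set(changed), list(changed)
    while stack:
        node = stack.pop()
        for nxt in children.get(node, []):
            if nxt not in dirty:
                dirty.add(nxt)
                stack.append(nxt)
    return dirty
```

For a toy pipeline load -> clean -> {train, evaluate} with train -> evaluate and load -> report, changing only the train step dirties just train and evaluate; clean, load, and report stay cached.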
10. Regarding claim 3, Sinha and Parameswaran teach the invention as claimed in claim 1 above, and Sinha further teaches wherein the instructions further cause the processor to retrieve remote historical DAGs'-related data from at least one remote DAGs' database based on the local historical DAGs'-related data, wherein the remote historical DAGs'-related data is collected at third-party DAG source entities ([0033], “In the example depicted in FIG. 1, processing device 104 receives marketing data from remote sources via network 108 and generates various simulations of the versions of the marketing data to determine a particular, desirable outcome.”; [0052], “The input dataset can be accessed from local sources (e.g., local memory or locally connected storage devices) or remote sources (e.g., databases, servers, user devices, etc.).”; [0053], “Alternative scenarios are defined by the processing device automatically or by user input received from an input/output device or from a remote device over a network.”).

11. Regarding claim 4, Sinha and Parameswaran teach the invention as claimed in claim 3 above, and Sinha further teaches wherein the instructions further cause the processor to generate the at least one feature vector based on the plurality of key features, the local historical DAGs'-related data combined with the remote historical DAGs'-related data ([0055], “the processing device can use the input dataset and historical dataset to observe correlations between the values of pairs of input features (feature vector).”).

12. Regarding claim 5, Sinha and Parameswaran teach the invention as claimed in claim 1 above, and Sinha further teaches wherein the instructions further cause the processor to parse the DAG input-related data to derive a plurality of key features comprising graph nodes-related variables comprising placeholders for data occupying an input of a function or an output of a function ([0056], “a processing device uses a loss function that accounts for the least square loss between the estimated data (e.g., the directed graph) and the actual data of the input dataset.”, “The modification to the dependent input features (e.g., the second input feature) is a function of at least (a) the modification of the first input feature and (b) a weight assigned to an edge linking the first input feature to the second input feature within the directed graph.”; [0077], “The value of each input is modeled by the function f(N.sub.i)=α.sub.i where i represents a particular node.”; [0079], “The value of dependent nodes is a function of at least (a) the modification of the input feature and (b) a weight assigned to an edge linking the first input feature to the second input feature within the directed graph.”; [0089], “At block 928, the processing device updates the destination value of the second input feature as a function of at least (a) the value of the input feature of the source node and (b) the updated weight.”).
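The Sinha passages quoted for claim 5 describe a dependent node's value as a function of (a) the modification of the source feature and (b) the weight of the connecting edge. Assuming a linear weighting (an assumption of this sketch; the reference does not fix the functional form), the propagation can be illustrated as:

```python
def propagate_modification(children, weights, values, node, new_value):
    """Set one input feature and push the change through the DAG:
    each dependent node moves by (edge weight) x (source delta),
    matching the function-of-modification-and-weight framing above.
    All names and the linear rule are illustrative."""
    values = dict(values)  # leave the caller's copy untouched
    delta = new_value - values[node]
    values[node] = new_value
    stack = [(node, delta)]
    while stack:
        src, d = stack.pop()
        for dst in children.get(src, []):
            step = weights[(src, dst)] * d
            values[dst] += step
            stack.append((dst, step))  # propagate to transitive dependents
    return values
```

For a chain A -> B -> C with weights 0.5 and 2.0, raising A from 10 to 14 moves B by 2 and C by 4, each step scaled by the edge weight on the way down.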
13. Regarding claim 6, Sinha and Parameswaran teach the invention as claimed in claim 1 above, and Sinha further teaches wherein the instructions further cause the processor to parse the DAG input-related data to derive a plurality of key features associated with variables comprising: data assigned directly; reference ID associated with data from a data source ([0038], “The requests include an identification of a set of marketing data and a definition of the simulation and/or scenario.”; [0039], “Each simulation stored in simulations 140 includes an identification of the marketing data used to define and run the simulation enabling the simulation to be rerun by processing device 104.”; [0043]; [0049], “The simulator 220 can use the identified input feature to reference a corresponding node of the graph data structure 204 and thereby determine any feature dependencies with respect to the identified input feature.”; [0059], “The processing device compares the sequence of outputs and identifies values for the input features that correspond to a deliverable outcome (e.g., that predicts an increase in the number of weekly purchases).”; [0069]).

14. Regarding claim 7, Sinha and Parameswaran teach the invention as claimed in claim 1 above, and Sinha further teaches wherein the instructions further cause the processor to continuously monitor incoming DAG input-related data to determine if at least one variable of the incoming DAG input-related data deviates from a value of previous DAGs'-related data by a margin exceeding a pre-set threshold value ([0027]; [0055], “The processing device continues to iteratively add, remove, or reverse edges until a particular score threshold is reached or until the score can no longer be increased.”; [0066], “the processing device samples based on the values that exceed a threshold probability (e.g., likely to occur in an observed dataset), particular predefined values, etc.”; [0068], “if the accuracy (e.g., matric 512) diverges by more than a threshold amount, the processing device may rebuild the graph data structure using historical data (e.g., the same data used to originally construct graph data structure) or using contemporaneously acquired data.”; [0084], “The processing device continuously optimizes the directed graph until the score exceeds a threshold value or the score no longer increases between iterations.”).

15. Regarding claim 8, Sinha and Parameswaran teach the invention as claimed in claim 7 above, and Sinha further teaches wherein the instructions further cause the processor to, responsive to the at least one variable of the incoming DAG input-related data deviating from the value of previous DAGs'-related data by the margin exceeding the pre-set threshold value, generate an updated feature vector based on the incoming DAG input-related data and generate a DAG update verdict based on the at least one DAG update parameter produced by the predictive model in response to the updated feature vector such that the model identifies and re-executes only a minimal subset of the DAG rather than re-executing the entire DAG ([0027]; [0055]; [0066]; [0068]; [0084], as quoted for claim 7 above).
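Claims 7 and 8 turn on a simple trigger: an incoming variable deviating from its historical value by more than a pre-set margin, which then prompts an updated feature vector. A sketch with illustrative names, using a relative-deviation margin (the claims do not specify relative versus absolute):

```python
def deviating_variables(historical, incoming, margin=0.2):
    """Compare incoming DAG input-related data against prior values and
    return the variables whose relative deviation exceeds the preset
    margin; a non-empty result is the trigger for regenerating the
    feature vector. Names and the 20% default are assumptions."""
    flagged = {}
    for name, old in historical.items():
        new = incoming.get(name, old)  # unchanged if no new reading
        deviation = abs(new - old) / max(abs(old), 1e-9)
        if deviation > margin:
            flagged[name] = (old, new)
    return flagged
```

A variable that jumps 50% is flagged while one that drifts 4% is not, so only genuine shifts in the input data cause the model to produce new DAG update parameters.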
16. Regarding claim 12, Sinha and Parameswaran teach the invention as claimed in claim 7 above, and Sinha further teaches wherein the instructions further cause the processor to map the at least one DAG update parameter to at least one reference ID ([0024], “Thus, after the automated modeling systems generates an updated set of input features using feature dependencies modeled by the graph data structure, the automated modeling system applies the predictive model to the updated set of input features. The predictive model generates an indication as to whether the modifications to the one or more input features will increase or decrease the probability that users will acquire the particular product or service.”; [0049], “The simulator 220 can use the identified input feature to reference a corresponding node of the graph data structure 204 and thereby determine any feature dependencies with respect to the identified input feature. The simulator 220 can thereby compute any corresponding changes to one or more other input features for use by a simulation.”).

17. Regarding claims 13-16 and 18, those claims recite a method performing the method of the system claims, 6-8 and 12 respectively, and are rejected under the same rationale.

18. Regarding claims 19-20, those claims recite a non-transitory computer-readable medium comprising instructions that, when read by a processor, cause the processor to perform the method of claims 1 and 6 & 12 respectively, and are rejected under the same rationale.

19. Claims 2, 10, 11 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Sinha et al. (US 20210133612 A1), hereinafter Sinha, in view of Parameswaran et al. (US 20200184376 A1), and further in view of Ford (US 20210091957 A1), hereinafter Ford.

20. Regarding claim 2, Sinha and Parameswaran teach the invention as claimed in claim 1 above, and Sinha further teaches wherein the instructions further cause the processor to derive a language indicator from the DAG input-related data and to parse the DAG input-related data based on the language indicator to derive a plurality of key features ([0034], “Program instructions 116 include instruction that receive marketing data and parse the marketing data to define input features for a predictive model. For instance, marketing data can include structured and unstructured data that represent various data and data points of the system. Program instructions 116 parse the data marketing data to determine input features and their values for the predictive model.”; [0082], “The scenario can include modified values for a first input feature to predict an outcome that will result as a result of the modification. In response to the request, the processing device accesses the input dataset that correspond to the identification (e.g., using a remote procedure call to a database, etc.).”; [0097], “using any suitable computer-programming language”; [0106]).

Sinha and Parameswaran do not specifically teach record the at least one DAG update parameter on a blockchain ledger along with the key features retrieved from the DAG input-related data for auditability and tamper-evident traceability. However, Ford teaches record the at least one DAG update parameter on a blockchain ledger along with the key features retrieved from the DAG input-related data for auditability and tamper-evident traceability ([0037]; [0170], “the different training and testing steps (and the data associated therewith) may be stored on the blockchain 810 by the host platform 820. Each refinement of the machine learning model (e.g., changes in variables, weights, etc.) may be stored on the blockchain 810.”; Fig 8A, 8B. Examiner Note: the “auditability and tamper-evident traceability” in the claim is the intended use of recording the DAG update parameter on a blockchain ledger along with the key features retrieved from the DAG input-related data (no weight)).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of Ford into the combined Sinha-Parameswaran system, because all three systems relate to AI-based automated systems and the combination would provide a solution that improves the consensus protocol of a DAG-based blockchain and overcomes these drawbacks and limitations (Ford, [0004]).

21. Regarding claim 10, Sinha and Parameswaran teach the invention as claimed in claim 1 above. Sinha and Parameswaran do not specifically teach wherein the instructions further cause the processor to retrieve the at least one DAG update parameter from the blockchain responsive to a consensus among the GCM node and the at least one DAG source entity node. However, Ford teaches this limitation ([0037], [0043], [0047-0054], “consensus protocol/process”). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ford's teachings into the combined Sinha-Parameswaran system for the same reasons given for claim 2 above (Ford, [0004]).
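The tamper-evident recording that Ford is cited for can be illustrated with a toy hash-chained ledger; a real system would add the claimed consensus among the GCM node and source-entity nodes, which this sketch omits. The record layout and function names are assumptions, not taken from Ford:

```python
import hashlib
import json

def append_update(ledger, update_params, key_features):
    """Append a DAG-update record whose hash covers both the record and
    the previous entry's hash, so any later edit breaks the chain."""
    prev = ledger[-1]["hash"] if ledger else "0" * 64  # genesis sentinel
    record = {"params": update_params, "features": key_features}
    digest = hashlib.sha256(
        json.dumps({"prev": prev, "record": record}, sort_keys=True).encode()
    ).hexdigest()
    ledger.append({"prev": prev, "record": record, "hash": digest})

def verify_chain(ledger):
    """Recompute every hash from genesis; False means tampering."""
    prev = "0" * 64
    for entry in ledger:
        expected = hashlib.sha256(
            json.dumps({"prev": prev, "record": entry["record"]}, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```

Appending two updates yields a chain that verifies; altering the key features of the first record afterwards makes verification fail, which is the auditability property the claim recites.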
22. Regarding claim 11, Sinha and Parameswaran teach the invention as claimed in claim 1 above. Sinha and Parameswaran do not specifically teach wherein the instructions further cause the processor to execute a smart contract to record data reflecting generation of the updated DAG associated with the DAG input-related data and the at least one developer entity node on the blockchain for future audits. However, Ford teaches this limitation ([0038], [0075], [0080]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ford's teachings into the combined Sinha-Parameswaran system for the same reasons given for claim 2 above (Ford, [0004]).

23. Regarding claim 17, this claim recites a method performing the method of the system of claim 9 and is rejected under the same rationale.

Response to Amendments and Arguments

24. In the REM received 02/27/2026, claims 1, 13, and 19 have been amended to include the subject matter of now-canceled claim 9, and applicant argued that Sinha does not disclose "wherein the at least one DAG update parameter modifies structure and execution flow of the DAG itself at the at least one DAG source entity by at least one of adding, removing, or reordering DAG nodes and edges based on monitored execution states, and wherein the predictive model identifies and re-executes only a minimal subset of the DAG rather than re-executing the entire DAG." Finally, applicant argued that Sinha does not disclose avoiding re-executing the entire DAG, because Sinha contains no teaching of computational efficiency through selective re-execution.

25. Applicant's 35 U.S.C. § 102/103 arguments have been fully considered but are moot in view of the new ground of rejection under 35 U.S.C. § 103 necessitated by applicant's amendment, as presented above. Parameswaran explicitly teaches wherein the predictive model identifies and re-executes only a minimal subset of the DAG rather than re-executing the entire DAG and apply the at least one DAG update parameter to optimize real-time operation of the at least one DAG source entity node ([0036], [0062], [0090], [0095-0100], [0201], Fig 1D, [0243-0247], as quoted for claim 1 above).

CONCLUSION

The Applicant's amendment necessitated a new ground of rejection. Therefore, THIS ACTION IS MADE FINAL. Applicants are reminded of the extension of time policy as set forth in 37 C.F.R. § 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.

In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to HICHAM SKHOUN, whose telephone number is (571) 272-9466. The examiner can normally be reached Mon-Fri, 10am-6:30pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Amy Ng, can be reached at 571-270-1698. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HICHAM SKHOUN/
Primary Examiner, Art Unit 2164

Prosecution Timeline

Dec 19, 2023: Application Filed
Aug 06, 2024: Response after Non-Final Action
Jan 23, 2025: Non-Final Rejection (§103)
Apr 28, 2025: Response Filed
May 10, 2025: Final Rejection (§103)
Jun 09, 2025: Interview Requested
Jun 17, 2025: Examiner Interview Summary
Jun 17, 2025: Applicant Interview (Telephonic)
Sep 15, 2025: Request for Continued Examination
Oct 01, 2025: Response after Non-Final Action
Oct 27, 2025: Non-Final Rejection (§103)
Feb 27, 2026: Response Filed
Mar 25, 2026: Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology:

- Patent 12591552: Distributed File System that Provides Scalability and Resiliency (granted Mar 31, 2026; 2y 5m to grant)
- Patent 12561304: DISTRIBUTABLE HASH FILTER FOR NONPROBABILISTIC SET INCLUSION (granted Feb 24, 2026; 2y 5m to grant)
- Patent 12536141: DEFRAGMENTATION FOR LOG STRUCTURED MERGE TREE TO IMPROVE READ AND WRITE AMPLIFICATION (granted Jan 27, 2026; 2y 5m to grant)
- Patent 12511292: CLUSTER VIEWS FOR COMPUTE SCALE AND CACHE PRESERVATION (granted Dec 30, 2025; 2y 5m to grant)
- Patent 12481672: METRICS MANAGEMENT SYSTEM (granted Nov 25, 2025; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 77%
With Interview: 83% (+5.6%)
Median Time to Grant: 3y 1m
PTA Risk: High

Based on 344 resolved cases by this examiner. Grant probability derived from career allow rate.
