Prosecution Insights
Last updated: April 19, 2026
Application No. 18/158,299

PRIVACY ENHANCED MACHINE LEARNING OVER GRAPH DATA

Non-Final OA: §101, §103, §112
Filed: Jan 23, 2023
Examiner: FARROW, FELICIA
Art Unit: 2437
Tech Center: 2400 — Computer Networks
Assignee: International Business Machines Corporation
OA Round: 1 (Non-Final)
Grant Probability: 60% (Moderate)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 60% (156 granted / 259 resolved; +2.2% vs TC avg)
Interview Lift: +34.8% (strong), comparing resolved cases with and without an examiner interview
Avg Prosecution: 3y 1m (typical timeline); 37 applications currently pending
Career History: 296 total applications across all art units
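
The headline projection figures can be reproduced from these career numbers. Below is a back-of-the-envelope check in Python; it assumes the grant probability is simply the career allow rate and that the "with interview" figure adds the quoted lift as percentage points, which are inferences from the dashboard's own captions rather than stated formulas.

```python
# Sanity check of the dashboard figures above.
# Assumptions (inferred, not stated): grant probability equals the career
# allow rate, and the "with interview" figure adds the lift in percentage points.
granted, resolved = 156, 259
allow_rate = granted / resolved                  # ~0.602 -> displayed as 60%
interview_lift = 0.348                           # +34.8% interview lift
with_interview = allow_rate + interview_lift     # ~0.950 -> displayed as 95%
print(f"allow rate: {allow_rate:.1%}, with interview: {with_interview:.1%}")
```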

Statute-Specific Performance

§101: 8.1% (-31.9% vs TC avg)
§103: 58.0% (+18.0% vs TC avg)
§102: 10.1% (-29.9% vs TC avg)
§112: 17.5% (-22.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 259 resolved cases.
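
One arithmetic detail: each statute's figure minus its quoted delta works out to the same 40.0%, which suggests the "vs TC avg" comparisons share a single Tech Center baseline estimate rather than per-statute averages. The quick check below verifies the arithmetic; the interpretation of that 40% baseline is an inference from the numbers shown, not stated data.

```python
# Each statute figure minus its delta recovers the same implied baseline (~40%).
# The "single TC baseline" reading is an inference, not a stated fact.
stats = {"101": (8.1, -31.9), "103": (58.0, 18.0),
         "102": (10.1, -29.9), "112": (17.5, -22.5)}
for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC baseline = {rate - delta:.1f}%")
```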

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings are objected to as failing to comply with 37 CFR 1.84(p)(5) because they include the following reference character(s) not mentioned in the description: reference number 506 of Figure 5. Corrected drawing sheets in compliance with 37 CFR 1.121(d), or amendment to the specification to add the reference character(s) in the description in compliance with 37 CFR 1.121(b), are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.

Specification

Applicant is reminded of the proper language and format for an abstract of the disclosure. The abstract should be in narrative form and generally limited to a single paragraph on a separate sheet within the range of 50 to 150 words in length. The abstract should describe the disclosure sufficiently to assist readers in deciding whether there is a need for consulting the full patent text for details. The language should be clear and concise and should not repeat information given in the title. It should avoid using phrases which can be implied, such as, “The disclosure concerns,” “The disclosure defined by this invention,” “The disclosure describes,” etc. In addition, the form and legal phraseology often used in patent claims, such as “means” and “said,” should be avoided.

The abstract of the disclosure is objected to because the abstract should avoid phrases which can be implied such as “…provided herein…”. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

The disclosure is objected to because of the following informalities: the equations presented in paragraph 81 are not legible; and the sliding property and dilation property equations presented in paragraph 84 are not legible. Appropriate correction is required.

Claim Objections

Claim 20 is objected to because of the following informalities: For claim 20, should “…determine, by the system…” instead be “…determine, by the [[system]] processor…”? Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.
The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f): (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f). The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f), is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f), except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f), because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a modeling component that trains” recited in claim 2; “an aggregation component that assembles” recited in claim 6; and “a budgeting component that determines” recited in claim 8. These components appear to be part of the system of claim 1, and not the computer executable components recited in claim 1. The specification fails to provide specific structure, material, or acts corresponding to the claimed “modeling component that trains”, “aggregation component that assembles”, and “budgeting component that determines”, see the 35 USC 112(a) and (b) rejections below. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f), it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 
112(f), applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f). Claim Rejections - 35 USC § 112 The following is a quotation of the first paragraph of 35 U.S.C. 112(a): (a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention. Claims 2-3 and 6- 8 are rejected under 35 U.S.C. 112(a) as failing to comply with the written description requirement. The claim(s) contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor at the time the application was filed, had possession of the claimed invention. For claims 2, 6, and 8, the written description fail to disclose the corresponding structure, material, or acts for performing the entire respective claimed function(s) and does not clearly link the structure, material, acts to the respective function(s). No association between an algorithm and structure can be found in the specification for the modeling component (in claim 2), the aggregation component (in claim 6), and the budgeting component in claim 8. Claims 2, 6, and 8 purports to invoke 35 USC 112(f); however, the examiner finds no association between these components and an algorithm and structure in the Applicant’s specification. Claims 3 and 7 are rejected as being dependent on, and failing to cure the deficiencies of, rejected claim 2. The following is a quotation of 35 U.S.C. 112(b): (b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention. Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention. Independent claims 1, 9, and 15 recite “wherein the inference component avoids directly exposing the first party information in the prediction”. Avoiding is a term that is ambiguous and does not provide measurable results. The claim fail to particularly point out how the inference component which are instruction executes an avoiding step. It is unclear whether a machine learning algorithm is being executing to prevent exposing the first party information in the prediction, masking operation is performed, or another means. For claims 2, 6, and 8, claim limitations “modeling component that trains”, “an aggregation component that assembles”, and “a budgeting component that determines” invoke 35 U.S.C. 112(f). However, the written description fails to disclose the corresponding structure, material, or acts for performing the entire claimed function and to clearly link the structure, material, or acts to the function. The disclosure is devoid of any structure that performs the function in the claim. 
Therefore, the claim is indefinite and is rejected under 35 U.S.C. 112(b). Applicant may: (a) Amend the claim so that the claim limitation will no longer be interpreted as a limitation under 35 U.S.C. 112(f); (b) Amend the written description of the specification such that it expressly recites what structure, material, or acts perform the entire claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (c) Amend the written description of the specification such that it clearly links the structure, material, or acts disclosed therein to the function recited in the claim, without introducing any new matter (35 U.S.C. 132(a)). If applicant is of the opinion that the written description of the specification already implicitly or inherently discloses the corresponding structure, material, or acts and clearly links them to the function so that one of ordinary skill in the art would recognize what structure, material, or acts perform the claimed function, applicant should clarify the record by either: (a) Amending the written description of the specification such that it expressly recites the corresponding structure, material, or acts for performing the claimed function and clearly links or associates the structure, material, or acts to the claimed function, without introducing any new matter (35 U.S.C. 132(a)); or (b) Stating on the record what the corresponding structure, material, or acts, which are implicitly or inherently set forth in the written description of the specification, perform the claimed function. For more information, see 37 CFR 1.75(d) and MPEP §§ 608.01(o) and 2181. Claims 2-8 are rejected as being dependent on, and failing to cure the deficiencies of, rejected claim 1. Claims 10-14 are rejected as being dependent on, and failing to cure the deficiencies of, rejected claim 9. Claims 16-20 are rejected as being dependent on, and failing to cure the deficiencies of, rejected claim 20. Claim Rejections - 35 USC § 101 35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title. Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The claim(s) recite(s) “…generates an access rule that modifies access to first data of a graph database, wherein the first data comprises first party information identified as private; …executes a random walk for sampling a first graph of the graph database while employing the access rule, wherein the first graph comprises the first data;…based on the sampling, generates a prediction in response to a query, wherein the inference component avoids directly exposing the first party information in the prediction”. The limitations above pertain to a method steps that are directed to, under its broadest reasonable interpretation, covers performance of the limitations being a mental process abstract idea. The steps recited above can be manually performed by a human using pencil and paper. Therefore, nothing in the claimed elements preclude the steps from being practically performed manually by a human via a mental process using pencil and paper. If a claim under its broadest reasonable interpretation covers performance in the mind, or a human using pencil and paper, then it falls within the mental processing grouping of abstract idea. 
Accordingly, claims 1, 9, and 15 pertain to an abstract idea. This judicial exception is not integrated into a practical application. Claim 1 recite additional elements of the steps being executing by a processor via instructions stored in memory. Claims 9 and 15 recites generic computer components that executes the steps. These instructions are a processing component, a sampling component, and an inference components. These additional components wherein the method are computer implemented components are recited at a high level of generality such that the generic computer amounts to no more than mere instructions to apply the exception. Implementing an abstract idea on a generic computer does not integrate the abstract idea into a practical application nor add significantly more. Thus, the independent claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. Accordingly, claims 1, 9, and 15 are not eligible under 35 USC 101. Claims 2, 7-8, 10, 13, 16, and 19 provide a further step of training a predicative model on the graph database and on access rule and employing the predictive model to generate the prediction in response to the query. It has been determined that the steps recited in said claims can also be achieve in the human mind using pencil and paper. The additional element of modeling element merely applies a machine learning model without disclosing any technological advances to the underlying machine learning technique. Applying a machine learning model without disclosing any technological advances to the underlying machine learning technique is not patentable. Merely applying generic machine learning techniques without providing technical innovation in the machine learning method are insufficient for patent eligibility, see Recentive Analytic, Inc v Fox Corp (April 18 2025). The budgeting and modeling components of claim 8 wherein the method are computer implemented components is recited at a high level of generality such that the generic computer component amount to no more than mere instructions to apply the exception. Implementing an abstract idea on a generic computer does not integrate the abstract idea into a practical application nor add significantly more. Thus, claims 2, 7, 10, 13, 16, and 19 are not eligible under 35 USC 101. Claims 3, 11, and 17 also provide step that can be achieve in the human mind using pencil and paper. The sampling component wherein the method are computer implemented components is recited at a high level of generality such that the generic computer component amount to no more than mere instructions to apply the exception. Implementing an abstract idea on a generic computer does not integrate the abstract idea into a practical application nor add significantly more. Thus, claims 3, 11, and 17 are not eligible under 35 USC 101. Claims 4-5, 12, and 18 merely limits the access rule limitations from the independent claims. This access rule as stated above can be generated via a human using pencil and paper. Therefore, for the same reasons above, claims 4-5, 12, and 18 recite an abstract idea and are not eligible under 35 USC 101. Claim 6 also provides step that can be achieve in the human mind using pencil and paper. The aggregation component wherein the method are computer implemented components is recited at a high level of generality such that the generic computer component amount to no more than mere instructions to apply the exception. 
Implementing an abstract idea on a generic computer does not integrate the abstract idea into a practical application nor add significantly more. Thus, claim 6 is not eligible under 35 USC 101. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-2, 4-10, 12-16, and 18- 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nerurkar et al US 20190026489 (hereinafter Nerurkar) in view of Nouri et al US 20220114456 (hereinafter Nouri). As to claim 1, Nerurkar teaches a system (Figure 1, Figure 8, and Figure 11 reveal differential private security system), comprising: a memory that stores computer executable components (Figure 11, reference number 1104 “Memory” that stores “Instructions”, see paragraphs 149-150 and 153); and a processor (Figure 11, reference number 1102 “Processor”) that executes the computer executable components stored in the memory (Figure 11, reference number 1124 “Instructions”, see paragraphs 149-150 and 153), wherein the computer executable components comprise: a processing component (Figure 11, reference number 1124 “Instructions”) that generates an access rule that modifies access to first data of a graph database (paragraphs 34-36 reveal access to the restricted data is specified in terms of a privacy budget. The privacy budget limits/modifies how much of the restricted data can be released, thus provided access rules to the private data. Privacy parameters also indicate the amount of restricted data to release from the database to the client in response to the query. The privacy parameters likewise indicate the amount of decrease in the relevant privacy budget (e.g., the budget for the client or entity with which the client is associated) in response to the query. The differentially private security system may apply a default set of privacy parameters to queries that do not specify the parameters. The values of the default privacy parameters may be determined based on the client, analyst, query type, and/or other factors), wherein the first data comprises first party information identified as private (paragraph 148 discloses obtaining level of differential privacy for a set of operations. The set of operations is modified based on the received level of differential privacy. The set of modified operation is performed on the set of accessed data. The data comes from the private database); a sampling component (Figure 11, reference number 1124 “Instructions”) that executes … sampling … [on] a first graph of the graph database[ decision tree data from data in a database] while employing the access rule (paragraph 133 reveals a model testing engine generates a series of cutoff thresholds based on numerical values of p. Samples values of p [from the decision tree wherein decision tree is graph data] are plotted on a range from 0 to 1. 
A series of k cutoff thresholds, or a series of intervals, are recursively identified by the median engine such that the number of elements of p in each interval is approximately equal. Specifically, the median engine recursively identifies the perturbed median for an interval and subsequently, its corresponding sub-intervals generated by dividing the interval by the identified perturbed median, until k thresholds are identified. Paragraph 40 reveals the restricted data is from the database); and an inference component (Figure 11, reference number 1124 “Instructions”) that, based on the sampling, generates a prediction in response to a query, wherein the inference component avoids directly exposing the first party information in the prediction (paragraphs 138-139 reveal synthetic database engine produce a response/prediction responsive to the differentially private security system receiving the query for transforming X into a synthetic database S, given privacy parameters (ε,δ). The synthetic database engine produces a DP response of a differentially private synthetic database query by projecting the elements of X to S using a projection matrix). Nerurkar does not teach a sampling component that executes a random walk for sampling a first graph of the graph database while employing [an] access rule, wherein the first graph comprises the first data. Nouri teaches a sampling component that executes a random walk for sampling a first graph of the graph database while employing the access rules, wherein the first graph comprises the first data (paragraphs 371, 373 disclose sampling the graphs using random walks. Paragraphs 196, 202 indicate that the data obtain comes from data structure/database, wherein paragraph 371 indicates the graph is loaded. After the graph is loaded, the embedding process perform a random walk on a node. The edges and the nodes of the graph contain the first data (see paragraphs 5 and 50). Paragraph 229 reveals that the sampling is limited to rules). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Nerurkar’s sampling of the data with Nouri’s teachings of sampling graph data to bound a memory requirement by the number of sampled nodes and provide an optimization that leverages auxiliary information from a pre-processing step and reduces the memory utilization (paragraph 371 of Nouri). As to claim 2, the combination of Nerurkar in view Nouri teaches a modeling component (Nerurkar: Figure 11, reference number 1124 “Instructions”) that trains a predictive model on the graph database and on the access rule (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. 
Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate for the category of the data entry), wherein the inference component (Nerurkar: Figure 11, reference number 1124 “Instructions”) employs the predictive model to generate the prediction in response to the query (Nerurkar: paragraph 116 reveals the random forest classifier/predictive model is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry in response to per paragraph 114 a valid query). As to claim 4, the combination of Nerurkar in view Nouri teaches wherein the access rule comprises a limit on a quantity of visits to a node or an edge of the graph database (Nerurkar: paragraph 120 also reveals the random forest engine returns a DP response of a differentially private random forest query by perturbing the proportion of training entries at leaf nodes of each trained binary decision tree based on the equation provided in paragraph 120. Paragraphs 34-36 reveal access to the restricted data is specified in terms of a privacy budget. Paragraph 73 reveals access for the data based on the query is limited by definition of ε-differential privacy, where D is the space of all possible data objects/nodes X, X’ neighboring data objects). As to claim 5, the combination of Nerurkar in view Nouri teaches wherein the access rule comprises perturbing the graph database with additional data (Nerurkar: paragraph 120 also reveals the random forest engine returns a DP response of a differentially private random forest query by perturbing the proportion of training entries at leaf nodes of each trained binary decision tree (decision tree is a graph) based on the equation provided in paragraph 120). As to claim 6, the combination of Nerurkar in view Nouri teaches further comprising: an aggregation component (Nerurkar: Figure 11, reference number 1124 “Instructions”) that assembles a set of graph embeddings for the graph database (Nerurkar: paragraph 40 reveals the restricted data in the database includes training data describing features of entities relevant to a particular condition and build classifiers from the training data. Paragraph 114 discloses generating a trained random forest classifier that bins a series of feature values/vectors into one among multiple categories, given privacy parameters (ε,δ). The random forest classifier is trained on the selected columns of X. Bins are often derived from graph embeddings) and that generates a set of feature vectors based on the graph embeddings, wherein the set of feature vectors are employed by the inference component (Nerurkar: Figure 11, reference number 1124 “Instructions”) to generate the prediction (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. 
Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry). As to claim 7, the combination of Nerurkar in view Nouri teaches wherein the modeling component trains the predictive model using a differential privacy-stochastic gradient descent approach (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain); paragraphs 107-109 reveal the stochastic gradient engine minimizes a loss function on training data (Xtrain, Ytrain) to find optimal values of parameter vector. A general minimization problem arises for finding the optimal values over training data and is given by the loss function. The minimization is solved by stochastic gradient descent on the loss function. The stochastic gradient engine returns a DP response/prediction of a differentially private stochastic gradient query by perturbing the update of θ at one or more time steps of the stochastic gradient descent algorithm). As to claim 8, the combination of Nerurkar in view Nouri teaches further comprising: a budgeting component that determines a privacy budget that comprises a noise distribution employed by the modeling component to train the predictive model (Nerurkar: paragraph 24 discloses a privacy budget describes limits on how much of the restricted data can be released. Paragraph 38 further discloses the differentially private security system may then transform the released results in a way that enforces differential privacy to produce the DP response . These transformations may involve perturbing the process by which the DP query is produced from the analytical query and/or the perturbing the results released by the database with noise that provides the differential privacy specified by the privacy parameters while enforcing the privacy budget). As to claim 9, Nerurkar teaches a computer-implemented method (Figure 10 discloses a method), comprising: generating, by a system (Figure 1, Figure 8, and Figure 11 reveal differential private security system) operatively coupled to a processor (Figure 11, reference number 1102 “Processor”), an access rule that modifies access to first data of a graph database (paragraphs 34-36 reveal access to the restricted data is specified in terms of a privacy budget. The privacy budget limits on how much of the restricted data can be released, thus provided access rules to the private data. Privacy parameters also indicate the amount of restricted data to release from the database to the client in response to the query. The privacy parameters likewise indicate the amount of decrease in the relevant privacy budget (e.g., the budget for the client or entity with which the client is associated) in response to the query. The differentially private security system may apply a default set of privacy parameters to queries that do not specify the parameters. 
The values of the default privacy parameters may be determined based on the client, analyst, query type, and/or other factors), wherein the first data comprises first party information identified as private (paragraph 148 discloses obtaining level of differential privacy for a set of operations. The set of operations is modified based on the received level of differential privacy. The set of modified operation is performed on the set of accessed data. The data comes from private database); executing, by the system (Figure 1, Figure 8, and Figure 11 reveal differential private security system; Figure 11, reference number 1102 “Processor”), … sampling on a first graph of the graph database/[decision tree data from data in a database] while employing the access rule (paragraph 133 reveals model testing engine generates a series of cutoff thresholds based on numerical values of p. Samples values of p [from the decision tree wherein decision tree is graph data] are plotted on a range from 0 to 1. A series of k cutoff thresholds, or a series of intervals, are recursively identified by the median engine such that the number of elements of p in each interval is approximately equal. Specifically, the median engine recursively identifies the perturbed median for an interval and subsequently, its corresponding sub-intervals generated by dividing the interval by the identified perturbed median, until k thresholds are identified. Paragraph 40 reveals the restricted data is from the database); and based on the sampling, generating, by the system (Figure 1, Figure 8, and Figure 11 reveal differential private security system; Figure 11, reference number 1102 “Processor”), a prediction in response to a query, wherein the generating comprises avoiding directly exposing the first party information in the prediction (paragraphs 138-139 reveal synthetic database engine produce a response/prediction responsive to the differentially private security system receiving the query for transforming X into a synthetic database S, given privacy parameters (ε,δ). The synthetic database engine produces a DP response of a differentially private synthetic database query by projecting the elements of X to S using a projection matrix). Nerurkar does not teach executing a random walk for sampling a first graph of the graph database while employing the access rule, wherein the first graph comprises the first data. Nouri teaches executing a random walk for sampling a first graph of the graph database while employing the access rule, wherein the first graph comprises the first data (paragraphs 371, 373 disclose sampling the graphs using random walks. Paragraphs 196, 202 indicate that the data obtain comes from data structure/database, wherein paragraph 371 indicates the graph is loaded. After the graph is loaded, the embedding process perform a random walk on a node. The edges and the nodes of the graph contain the first data (see paragraphs 5 and 50). Paragraph 229 reveals that the sampling is limited to rules). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Nerurkar’s sampling of the data with Nouri’s teachings of sampling graph data to bound a memory requirement by the number of sampled nodes and provide an optimization that leverages auxiliary information from a pre-processing step and reduces the memory utilization (paragraph 371 of Nouri). 
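
As background on the technology in dispute in this ground of rejection: the contested limitation is a random walk that samples a graph while an access rule is enforced, and claims 3, 11, and 17 add a restart probability. The Python sketch below is only a generic illustration of those concepts; the toy graph, the per-node visit limit standing in for an access rule, and the restart parameter are assumptions for illustration, not the claimed system and not code from Nerurkar, Nouri, or Zhuang.

```python
import random

# Illustrative only: a toy graph as an adjacency list. The node labels,
# the visit-limit "access rule", and the restart probability are assumptions
# for this sketch, not the claimed invention or the cited references.
graph = {
    "a": ["b", "c"],
    "b": ["a", "c", "d"],
    "c": ["a", "b"],
    "d": ["b"],
}

def random_walk(graph, start, steps=20, restart_p=0.15, max_visits=3):
    """Random-walk sampling with a restart probability and a per-node
    visit limit standing in for an 'access rule' on the graph data."""
    visits = {node: 0 for node in graph}
    path = []
    current = start
    for _ in range(steps):
        if visits[current] >= max_visits:
            current = start              # access rule: cap visits, jump back
            continue
        visits[current] += 1
        path.append(current)
        if random.random() < restart_p:
            current = start              # restart probability (cf. claims 3, 11, 17)
        else:
            current = random.choice(graph[current])
    return path

print(random_walk(graph, "a"))
```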
As to claim 10, the combination of Nerurkar in view Nouri teaches further comprising: training, by the system (Nerurkar: Figure 1, Figure 8, and Figure 11 reveal differential private security system), using a differential privacy-stochastic gradient descent approach, a predictive model on the graph database and on the access rule (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate for the category of the data entry); and employing, by the system (Nerurkar: Figure 1, Figure 8, and Figure 11 reveal differential private security system), the predictive model to generate the prediction in response to the query (Nerurkar: paragraph 116 reveals the random forest classifier/predictive model is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry in response to per paragraph 114 a valid query). As to claim 12, the combination of Nerurkar in view Nouri teaches wherein the access rule comprises at least one of a limit on a quantity of visits to a node or an edge of the graph database or a perturbance of the graph database with additional data (Nerurkar: paragraph 120 also reveals the random forest engine returns a DP response of a differentially private random forest query by perturbing the proportion of training entries at leaf nodes of each trained binary decision tree based on the equation provided in paragraph 120. Paragraphs 34-36 reveal access to the restricted data is specified in terms of a privacy budget. Paragraph 73 reveals access for the data based on the query is limited by definition of ε-differential privacy, where D is the space of all possible data objects/nodes X, X’ neighboring data objects). As to claim 13, the combination of Nerurkar in view Nouri teaches further comprising: assembling, by the system (Nerurkar: Figure 1, Figure 8, and Figure 11 reveal differential private security system), a set of graph embeddings for the graph database (Nerurkar: paragraph 40 reveals the restricted data in the database includes training data describing features of entities relevant to a particular condition and build classifiers from the training data. Paragraph 114 discloses generating a trained random forest classifier that bins a series of feature values/vectors into one among multiple categories, given privacy parameters (ε,δ). The random forest classifier is trained on the selected columns of X. 
Bins are often derived from graph embeddings); generating, by the system (Nerurkar: Figure 1, Figure 8, and Figure 11 reveal differential private security system), a set of feature vectors based on the graph embeddings (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry); and employing, by the system, the set of feature vectors to train the predictive model (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry). As to claim 14, the combination of Nerurkar in view Nouri teaches further comprising: determining, by the system (Nerurkar: Figure 1, Figure 8, and Figure 11 reveal differential private security system), a privacy budget that comprises a noise distribution employed by the modeling component to train the predictive model (Nerurkar: paragraph 24 discloses a privacy budget describes limits on how much of the restricted data can be released. Paragraph 38 further discloses the differentially private security system may then transform the released results in a way that enforces differential privacy to produce the DP response. These transformations may involve perturbing the process by which the DP query is produced from the analytical query and/or the perturbing the results released by the database with noise that provides the differential privacy specified by the privacy parameters while enforcing the privacy budget). As to claim 15, Nerurkar teaches a computer program product (Figure 11, reference number 1104 “Memory” that stores “Instructions”, see paragraphs 149-150 and 153) facilitating a process for privacy-enhanced machine learning and inference (Figure 10 discloses a method. 
Paragraph 2 reveals the present invention generally relates to building classifiers used in computerized machine learning, and more specifically to preserving privacy of training data used to build a machine-learned classifier), the computer program product comprising a computer readable storage medium having program instructions embodied therewith (Figure 11, reference number 1104 “Memory” that stores “Instructions”, see paragraphs 149-150 and 153), the program instructions executable (Figure 11, reference number 1124 “Instructions”) by a processor (Figure 11, reference number 1102 “Processor”) to cause the processor (see paragraphs 149-150 and 153) to: generate, by the processor (Figure 11, reference number 1102 “Processor”), an access rule that modifies access to first data of a graph database (paragraphs 34-36 reveal access to the restricted data is specified in terms of a privacy budget. The privacy budget describes limits on how much of the restricted data can be released, thus provided access rules to the private data. Privacy parameters also indicate the amount of restricted data to release from the database to the client in response to the query. The privacy parameters likewise indicate the amount of decrease in the relevant privacy budget (e.g., the budget for the client or entity with which the client is associated) in response to the query. The differentially private security system may apply a default set of privacy parameters to queries that do not specify the parameters. The values of the default privacy parameters may be determined based on the client, analyst, query type, and/or other factors), wherein the first data comprises first party information identified as private (paragraph 148 discloses obtaining level of differential privacy for a set of operations. The set of operations is modified based on the received level of differential privacy. The set of modified operation is performed on the set of accessed data. The data comes from private database); execute, by the processor (Figure 11, reference number 1102 “Processor”), … sampling … a first graph of the graph database/[on decision tree data from data in a database] while employing the access rule, wherein the first graph comprises the first data (paragraph 133 reveals model testing engine generates a series of cutoff thresholds based on numerical values of p. Samples values of p [from the decision tree wherein decision tree is graph data] are plotted on a range from 0 to 1. A series of k cutoff thresholds, or a series of intervals, are recursively identified by the median engine such that the number of elements of p in each interval is approximately equal. Specifically, the median engine recursively identifies the perturbed median for an interval and subsequently, its corresponding sub-intervals generated by dividing the interval by the identified perturbed median, until k thresholds are identified. Paragraph 40 reveals the restricted data is from the database); and based on the sampling, generate, by the processor (Figure 11, reference number 1102 “Processor”), a prediction in response to a query, wherein the generating comprises avoiding directly exposing the first party information in the prediction (paragraphs 138-139 reveal synthetic database engine produce a response/prediction responsive to the differentially private security system receiving the query for transforming X into a synthetic database S, given privacy parameters (ε,δ). 
The synthetic database engine produces a DP response of a differentially private synthetic database query by projecting the elements of X to S using a projection matrix). Nerurkar does not teach execute a random walk for sampling a first graph of the graph database while employing the access rule, wherein the first graph comprises the first data. Nouri teaches execute a random walk for sampling a first graph of the graph database while employing the access rule, wherein the first graph comprises the first data (paragraphs 371, 373 disclose sampling the graphs using random walks. Paragraphs 196, 202 indicate that the data obtain comes from data structure/database, wherein paragraph 371 indicates the graph is loaded. After the graph is loaded, the embedding process perform a random walk on a node. The edges and the nodes contains the first data (see paragraphs 5 and 50). Paragraph 229 reveals that the sampling is limited to rules). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Nerurkar’s sampling of the data with Nouri’s teachings of sampling graph data to bound a memory requirement by the number of sampled nodes and provide an optimization that leverages auxiliary information from a pre-processing step and reduces the memory utilization (paragraph 371 of Nouri). As to claim 16, the combination of Nerurkar in view Nouri teaches wherein the program instructions are further executable (Nerurkar: Figure 11, reference number 1124 “Instructions”, see paragraphs 149-150 and 153) by the processor to cause the processor to (Nerurkar: Figure 11, reference number 1102 “Processor”): train, by the processor (Nerurkar: Figure 11, reference number 1102 “Processor”), using a differential privacy-stochastic gradient descent approach, a predictive model on the graph database and on the access rule (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate for the category of the data entry); and employ, by the processor (Nerurkar: Figure 11, reference number 1102 “Processor”), the predictive model to generate the prediction in response to the query (Nerurkar: paragraph 116 reveals the random forest classifier/predictive model is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry in response to per paragraph 114 a valid query). 
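
Claims 7, 10, and 16 recite training the predictive model with a differential privacy-stochastic gradient descent approach, which the rejection maps to Nerurkar's perturbed stochastic gradient descent. The sketch below is a generic DP-SGD-style update (per-example gradient clipping plus Gaussian noise) on toy data; the loss, hyperparameters, and data are assumptions for illustration, not the applicant's claimed training procedure or Nerurkar's implementation. In this style of training, the clip norm bounds each example's influence, and the noise scale together with the number of steps governs the privacy budget spent.

```python
import numpy as np

# Illustrative DP-SGD-style update for a linear model on toy data.
# Clip norm, noise scale, learning rate, and data are assumptions for the
# sketch only; this is not the claimed or cited training procedure.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))                        # toy feature vectors
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + rng.normal(scale=0.1, size=64)     # toy labels

theta = np.zeros(5)
clip_norm, noise_scale, lr = 1.0, 0.5, 0.1

for step in range(200):
    idx = rng.choice(len(X), size=16, replace=False)      # minibatch
    clipped = []
    for i in idx:
        g = 2.0 * (X[i] @ theta - y[i]) * X[i]            # squared-error gradient
        g *= min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # per-example clip
        clipped.append(g)
    noise = rng.normal(scale=noise_scale * clip_norm, size=5)   # Gaussian noise
    theta -= lr * (np.sum(clipped, axis=0) + noise) / len(idx)  # noisy mean step

print(np.round(theta, 2))   # noisy learned parameters
```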
As to claim 18, the combination of Nerurkar in view Nouri teaches wherein the access rule comprises at least one of a limit on a quantity of visits to a node or an edge of the graph database or a perturbance of the graph database with additional data (Nerurkar: paragraph 120 also reveals the random forest engine returns a DP response of a differentially private random forest query by perturbing the proportion of training entries at leaf nodes of each trained binary decision tree based on the equation provided in paragraph 120. Paragraphs 34-36 reveal access to the restricted data is specified in terms of a privacy budget. Paragraph 73 reveals access for the data based on the query is limited by definition of ε-differential privacy, where D is the space of all possible data objects/nodes X, X’ neighboring data objects). As to claim 19, the combination of Nerurkar in view Nouri teaches wherein the program instructions are further executable (Nerurkar: Figure 11, reference number 1124 “Instructions”, see paragraphs 149-150 and 153) by the processor to cause the processor to (Nerurkar: Figure 11, reference number 1102 “Processor”): assemble, by the processor (Nerurkar: Figure 11, reference number 1102 “Processor”), a set of graph embeddings for the graph database (Nerurkar: paragraph 40 reveals the restricted data in the database includes training data describing features of entities relevant to a particular condition and build classifiers from the training data. Paragraph 114 discloses generating a trained random forest classifier that bins a series of feature values/vectors into one among multiple categories, given privacy parameters (ε,δ). The random forest classifier is trained on the selected columns of X. Bins are often derived from graph embeddings); generate, by the processor (Nerurkar: Figure 11, reference number 1102 “Processor”), a set of feature vectors based on the graph embeddings (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry); and employ, by the processor (Nerurkar: Figure 11, reference number 1102 “Processor”), the set of feature vectors to train the predictive model (Nerurkar: paragraphs 114-116 disclose a random forest classifier/predictive model is trained on training data (Xtrain, Ytrain). Xtrain is a matrix database (matrix is graph data) in which each column corresponds to a selected feature of interest to the client , and y is a column vector of already known labels indicating the category of a corresponding entry. Each entry in Xtrain has a one-to-one correspondence with a label entry in y. 
Upon being trained, the random forest classifier, or a classifier in general, receives a new data entry with selected feature values and generates an estimate of the category for the new entry. The random forest classifier is an ensemble of individual binary decision tree classifiers (rules), in which each binary decision tree generates an estimate for the category of an entry. Given a new data entry, the random forest classifier aggregates the category estimates from each binary decision tree and produces a final estimate/prediction for the category of the data entry). As to claim 20, the combination of Nerurkar in view Nouri teaches wherein the program instructions are further executable (Nerurkar: Figure 11, reference number 1124 “Instructions”, see paragraphs 149-150 and 153) by the processor to cause the processor to (Nerurkar: Figure 11, reference number 1102 “Processor”): determine, by the system (Nerurkar: Figure 1, Figure 8, and Figure 11 reveal differential private security system), a privacy budget that comprises a noise distribution employed by the modeling component to train the predictive model (Nerurkar: paragraph 24 discloses a privacy budget describes limits on how much of the restricted data can be released. Paragraph 38 further discloses the differentially private security system may then transform the released results in a way that enforces differential privacy to produce the DP response. These transformations may involve perturbing the process by which the DP query is produced from the analytical query and/or the perturbing the results released by the database with noise that provides the differential privacy specified by the privacy parameters while enforcing the privacy budget). Claim(s) 3, 11, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Nerurkar et al US 20190026489 (hereinafter Nerurkar), in view of Nouri et al US 20220114456 (hereinafter Nouri), in further view of Zhuang et al CN 113792937 English Machine Translation (hereinafter Zhuang). As to claim 3, the combination of Nerurkar in view of Nouri teaches all the limitations presented in claim 2 above. The combination of Nerurkar in view of Nouri does not teach, but Zhuang teaches wherein the sampling component executes (page 14, paragraph 1 reveal the processor executes the program after receiving the execution instruction) the random walk with a restart probability (page 16, claim 4/second paragraph reveals the random walk is used to set the restart probability parameter back to the starting point as P). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Nerurkar’s sampling of the data in view of Nouri’s teachings of sampling graph data with Zhuang’s teachings of random walk and restart probability to provide improved influence prediction methods based on graph neural network (page 3, second paragraph of Zhuang). As to claim 11, the combination of Nerurkar in view of Nouri teaches all the limitations presented in claim 10 above. 
The combination of Nerurkar in view of Nouri does not teach, but Zhuang teaches further comprising: executing, by the system (page 14, paragraph 1 reveal the processor executes the program after receiving the execution instruction), the random walk with a restart probability (page 14, paragraph 1 reveals the processor executes the program after receiving the execution instruction) the random walk with a restart probability (page 16, claim 4/second paragraph reveal the random walk is used to set the restart probability parameter back to the starting point as P). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Nerurkar’s sampling of the data in view of Nouri’s teachings of sampling graph data with Zhuang’s teachings of random walk and restart probability to provide improved influence prediction methods based on graph neural network (page 3, second paragraph of Zhuang). As to claim 17, the combination of Nerurkar in view of Nouri teaches all the limitations presented in claim 16 above. The combination of Nerurkar in view of Nouri does not teach, but Zhuang teaches wherein the program instructions are further executable by the processor to cause the processor to (page 14, paragraph 1 reveals the processor executes the program after receiving the execution instruction): execute, by the processor, the random walk with a restart probability (page 14, paragraph 1 reveal the processor executes the program after receiving the execution instruction) the random walk with a restart probability (page 16, claim 4/second paragraph reveal the random walk is used to set the restart probability parameter back to the starting point as P). It would have been obvious for one having ordinary skill in the art before the effective filing date of the claimed invention to modify Nerurkar’s sampling of the data in view of Nouri’s teachings of sampling graph data with Zhuang’s teachings of random walk and restart probability to provide improved influence prediction methods based on graph neural network (page 3, second paragraph of Zhuang). Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to FELICIA FARROW whose telephone number is (571)272-1856. The examiner can normally be reached M - F 7:30am-4:00pm (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexander Lagor can be reached at (571)270-5143. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). 
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /F.F/ Examiner, Art Unit 2437 /ALI S ABYANEH/ Primary Examiner, Art Unit 2437

Prosecution Timeline

Jan 23, 2023
Application Filed
Oct 23, 2023
Response after Non-Final Action
Mar 23, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598186: INTELLIGENT RESOURCE ALLOCATION BASED ON SECURITY PROFILE OF EDGE DEVICE NETWORK (granted Apr 07, 2026; 2y 5m to grant)
Patent 12579299: USING VENDOR-INDEPENDENT PROTOCOLS TO PERFORM IDENTITY AND ACCESS MANAGEMENT FOR ELECTRONIC MEDICAL RECORD INSTANCES (granted Mar 17, 2026; 2y 5m to grant)
Patent 12572694: DATA PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM (granted Mar 10, 2026; 2y 5m to grant)
Patent 12561421: DIAGNOSE INSTRUCTION TO EXECUTE VERIFICATION CERTIFICATE RELATED FUNCTIONS (granted Feb 24, 2026; 2y 5m to grant)
Patent 12549630: System And Method for Managing Data Stored in A Remote Computing Environment (granted Feb 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 60%
With Interview: 95% (+34.8%)
Median Time to Grant: 3y 1m
PTA Risk: Low
Based on 259 resolved cases by this examiner. Grant probability derived from career allow rate.
