Prosecution Insights
Last updated: April 19, 2026
Application No. 18/777,596

System and Method for Utilizing a Large Language Model (LLM) with Constraints Derived from Organizational Context

Status: Final Rejection (§103)
Filed: Jul 19, 2024
Examiner: TOLENTINO, RODERICK
Art Unit: 2439
Tech Center: 2400 (Computer Networks)
Assignee: Varonis Systems, Inc.
OA Round: 2 (Final)

Outlook:
Grant probability: 77% (Favorable)
Expected OA rounds: 3-4
Expected time to grant: 3y 4m
Grant probability with interview: 99%
Examiner Intelligence

Career allow rate: 77% (545 granted / 705 resolved), +19.3% vs Tech Center average (above average)
Interview lift: +35.4% higher allowance rate on resolved cases with an interview (a strong lift)
Typical timeline: 3y 4m average prosecution; 25 applications currently pending
Career history: 730 total applications across all art units

Statute-Specific Performance

§101: 15.7% (-24.3% vs TC avg)
§103: 56.2% (+16.2% vs TC avg)
§102: 11.9% (-28.1% vs TC avg)
§112: 8.3% (-31.7% vs TC avg)

Deltas are measured against a Tech Center average estimate. Based on career data from 705 resolved cases.
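The figures above are simple ratios and offsets; as a sanity check, the short script below reproduces them. The numbers come from the report itself; the report does not define precisely what the per-statute percentages measure, so the script only verifies the arithmetic (and the implied Tech Center baseline, taken as rate minus delta).

```python
# Sanity-check the examiner statistics quoted above.
# All values are copied from the report; nothing here is new data.
granted = 545
resolved = 705

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # reported as 77%

# Per-statute rate and delta vs the Tech Center average estimate.
statutes = {
    "101": (15.7, -24.3),
    "103": (56.2, +16.2),
    "102": (11.9, -28.1),
    "112": (8.3, -31.7),
}
for statute, (rate, delta) in statutes.items():
    tc_avg = rate - delta  # implied Tech Center baseline
    print(f"§{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

Note that 545/705 rounds to 77.3%, consistent with the 77% headline figure.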

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Detailed Action

This Office Action is in response to the reply filed by Applicant on 12/18/2025. Claim 8 has been cancelled. Claims 1-7 and 9-20 are pending. This Office Action is Final.

Information Disclosure Statement

The information disclosure statement (IDS), submitted on 9/28/2025, is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary.
Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention, in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-4, 6, 7 and 11-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mantin et al. (US 2025/0111092) in view of Muller et al. (US 2011/0302180) and Gupta et al. (US 2025/0317474).

As per claim 1, Mantin teaches a computerized method comprising:

(a) receiving an original prompt that a querying user sends to a Large Language Model (LLM) that is operably connected to organizational data sources of an organization (Mantin, Paragraph 0037 recites “Turning to FIG. 2, in Block 202, a user query to the LLM is received. The user query may be received via a graphical user interface (GUI) widget. The GUI with the GUI widget may or may not obfuscate the existence of the LLM. For example, the GUI may be a help interface for the application that uses the LLM as a backend. As another example, the GUI may be a dedicated GUI for the LLM or may otherwise indicate that the user query would be transmitted to the LLM.”);

(c) sending the adapted prompt, and not the original prompt, to the LLM for processing, and obtaining LLM-generated output from said LLM in response to said adapted prompt (Mantin, Paragraph 0041 recites “In Block 210, the LLM query is sent to the LLM. The LLM query is transmitted to the LLM using the application programming interface of the LLM. The LLM processes the LLM query to generate a response. The LLM is an artificial intelligence system that uses vast amounts of data to generate the LLM response. The LLM response is a natural language response that may be in virtually any natural language format and have virtually any content. The LLM response is transmitted via the API to the LLM query manager.”).

Mantin fails to teach (b) instead of executing said original prompt by the LLM, performing: (b1) obtaining user-related organizational context that pertains to characteristics of the querying user; (b2) obtaining data-related organizational context that pertains to data from which said LLM is expected to obtain information for responding to the original query; and (b3) obtaining pre-defined organizational policy rules that indicate which types of users are authorized to access which types of organizational data.

However, in an analogous art, Muller teaches limitations (b1) through (b3) (Muller, Paragraphs 0023-0028 recites “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to use Muller’s computer relational database method and system having role-based access control with Mantin’s leakage detection for large language models, because it offers the advantage of ensuring that authorized users are utilizing protected data properly.

The combination fails to teach (b0) crawling the organizational data sources; extracting, from the organizational data sources, extracted data that comprises at least: user permissions, organizational chart, and access logs; and constructing, from said extracted data: (i) a first semantic index that is a first vectorized database reflecting user-related organizational context, and (ii) a second semantic index that is a second vectorized database reflecting data-related organizational context; and (b4) based on (i) the user-related organizational context, (ii) the data-related organizational context, and (iii) the pre-defined organizational policy rules, modifying the original prompt into an adapted prompt, by adding prompt-constraining instructions that (i) constrain the LLM to generate the LLM-generated output using only organizational data that the querying user is authorized to access under said pre-defined organizational policy rules, and (ii) cause exclusion of unauthorized organizational data or data-types from the LLM-generated output.
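The (b0) limitation just recited, crawling organizational sources and building two semantic indexes (vectorized databases) for user-related and data-related context, can be sketched as follows. This is an illustrative toy, not the claimed implementation: the records are hypothetical, and a stable hashed bag-of-words stands in for a real trained embedding model.

```python
import hashlib
import math

def _bucket(token: str, dim: int) -> int:
    # Stable hash so the toy embeddings are reproducible across runs.
    return int(hashlib.md5(token.encode()).hexdigest(), 16) % dim

def embed(text: str, dim: int = 256) -> list[float]:
    """Toy embedding: hashed bag-of-words. A real system would use a
    trained embedding model; this only illustrates the data flow."""
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[_bucket(token, dim)] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Crawled" organizational data (entirely hypothetical records).
user_records = [
    "alice role=engineer group=platform permissions=code,logs",
    "bob role=analyst group=finance permissions=reports,budgets",
]
data_records = [
    "doc=q3-budget.xlsx classification=finance-restricted",
    "doc=service-runbook.md classification=engineering-internal",
]

# (b0): two semantic indexes, one for user-related context and one
# for data-related context, each a list of (record, vector) pairs.
user_index = [(r, embed(r)) for r in user_records]
data_index = [(r, embed(r)) for r in data_records]

def lookup(index: list[tuple[str, list[float]]], query: str) -> str:
    """Return the indexed record most similar to the query."""
    q = embed(query)
    return max(index, key=lambda rec: cosine(rec[1], q))[0]

print(lookup(user_index, "bob role=analyst group=finance"))
print(lookup(data_index, "doc=q3-budget.xlsx"))
```

In a production system each index would be a real vector database populated from permissions, the organizational chart, and access logs; the retrieval pattern, however, is the same nearest-neighbor lookup shown here.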
However, in an analogous art, Gupta teaches limitation (b0) (Gupta, Paragraph 0114 recites “LLM privacy protection tool module 302 may be utilized to determine a relevant role of an entity that queries an LLM as explained in the exemplary embodiment(s) above. LLM privacy protection tool module 302 may utilize the entity's relevant role to determine a level of granularity for a response to the entity's query. The level of granularity for the response may be based on an entitlement and may ensure that PII or other sensitive information is redacted with replacement.” And Paragraphs 0086-0087 recites “To determine an entitlement level, LPPT device 202 may utilize various information feeds to identify a querying entity, the querying entity's role or job title, the querying entity's place in a hierarchy, systems to which the querying entity has access, the querying entity's officer title, and other information that helps an aggregator component of LPPT device 202 pass such identified information into a classifier component of LPPT device 202 to determine the contextual level of detail to provide within a response. Thereby monitoring whether querying entities' attributes meet necessary criteria, helps prevent unauthorized access because there will be no context to provide an answer via the classifier when a querying entity does not meet any of the necessary criteria. This not only provides a specific case of a privacy attack, but actually an entire range of responses based on dynamically calculated entitlements against the aggregator for the necessary redaction or content moderation. The aggregator may combine any determined entitlement information with any corresponding information that has been identified, then the aggregator may utilize the combined information to apply context-based redaction.”).

Gupta also teaches limitation (b4) (Gupta, Paragraphs 0106-0107 recites “At step S414, LLM privacy protection tool module 302 redacts the at least one client query textual response according to the result of the evaluation and the at least one current client privilege level. In an embodiment, LLM privacy protection tool module 302 may utilize the at least one AI/ML model to perform step S414. In an additional or alternative embodiment, step S414 may comprise: utilizing the result of the evaluation, and the at least one current client privilege level, to determine at least one redaction to be made to the at least one client query textual response; and making the at least one redaction to the at least one client query textual response. In yet a further embodiment, at step S414, LLM privacy protection tool module 302 may make the at least one redaction to the at least one client query textual response by performing at least one operation from among: data augmentation, attribute suppression, token masking, pseudonymization, generalization, swapping, data perturbation, synthetic data generation, data aggregation, and add random noise.”).

It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to use Gupta’s method and system for securing large language model services against privacy attacks with Mantin’s leakage detection for large language models, because it offers the advantage of protecting secured data.

As per claim 2, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Mantin further teaches wherein step (a) of receiving the original prompt comprises: intercepting the original prompt on a communication path from an electronic device of the querying user to said LLM (Mantin, Paragraph 0037, quoted above); and wherein step (b4) of modifying the original prompt comprises: modifying the original prompt on said communication path, wherein only the adapted prompt and not the original prompt is transferred to said LLM for processing (Mantin, Paragraphs 0039-0040 recites “In Block 206, the LLM query is created from the user query and the application context. The application context is appended to the user query.
Further, at least one prohibited response instruction may be appended on the LLM query. Specifically, the prohibited response instruction(s) may be added before or after the user query to create the LLM query. The LLM firewall may also inject additional instructions into the query, such as to perform additional security operations. In Block 208, confidential information is gathered from the LLM query. The collector extracts the confidential information from the LLM query and populates the query record storage with the confidential information. The collector may optionally perform additional processing such as the processing described in FIG. 3.”).

As per claim 3, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Mantin further teaches wherein step (a) of receiving the original prompt comprises: receiving the original prompt at said LLM; and transferring the original prompt, without processing the original prompt, to an LLM extension module that performs prompt adaptation operations of steps (b1) through (b4) and then transfers the adapted prompt to said LLM for processing (Mantin, Paragraph 0037 and Paragraphs 0039-0040, quoted above).

As per claim 4, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches wherein step (b4) of modifying the original prompt comprises: constructing the adapted prompt by an Assistive LLM, that is pre-configured or pre-trained or fine-tuned to specialize in prompt engineering and LLM grounding, wherein the Assistive LLM receives as input: (i) the original prompt, (ii) the user-related organizational context, (iii) the data-related organizational context, and (iv) the pre-defined organizational policy rules (Muller, Paragraphs 0023-0028, quoted above). The motivation to combine is the same as set forth for claim 1.

As per claim 6, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches wherein obtaining the user-related organizational context comprises: analyzing organizational data sources, and estimating to which peer groups said querying user belongs; and based on belonging or non-belonging of the querying user to one or more particular peer groups, determining whether the querying user is authorized or unauthorized to access a particular type of data (Muller, Paragraphs 0023-0028, quoted above; and Paragraph 0109 recites “As a result, the security runtime system 17 applies the security restrictions, as stored in metamodel 13, defined for the user by assigned user roles, by different people at different data levels (tables, subtables, rows, columns, entities) of model 23 relating to target database 19. Security runtime looks at each data level and applies security restrictions (the rules in the security data subsystem) as pertinent in modifying the query 11. This causes the data passed back via the REST call to be filtered horizontally (by target database 19 table rows) and vertically (by target database 19 table columns).”). The motivation to combine is the same as set forth for claim 1.

As per claim 7, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches wherein obtaining the user-related organizational context comprises: analyzing organizational data sources, and estimating whether or not information that is expected to be returned by said LLM in response to the original query is information that an organizational position of the querying user typically accesses and uses; and if not, then adapting the original query to cause exclusion of said information from the LLM-generated output (Muller, Paragraphs 0023-0028 and Paragraph 0109, quoted above). The motivation to combine is the same as set forth for claim 1.
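Muller's intercept-and-modify approach, quoted above, can be illustrated with a minimal sketch: the query is intercepted, the user's role is resolved, and the requested columns are filtered down to those the role is entitled to see before execution. The roles, schema, and data below are all hypothetical.

```python
# Minimal sketch of role-based query modification in the spirit of
# Muller's repository: determine the user's role, then restrict the
# requested columns to those the role may access. Everything here
# (roles, columns, rows) is hypothetical.

ROLE_COLUMNS = {
    "analyst": {"name", "department"},
    "hr_manager": {"name", "department", "salary"},
}

USER_ROLES = {"carol": "analyst", "dan": "hr_manager"}

EMPLOYEES = [
    {"name": "Eve", "department": "R&D", "salary": 120_000},
    {"name": "Frank", "department": "Sales", "salary": 95_000},
]

def run_query(user: str, requested_columns: set[str]) -> list[dict]:
    """Intercept the query, restrict columns by role, then execute."""
    role = USER_ROLES[user]
    allowed = requested_columns & ROLE_COLUMNS[role]  # the "modified query"
    return [{col: row[col] for col in allowed} for row in EMPLOYEES]

print(run_query("carol", {"name", "salary"}))  # salary filtered out
print(run_query("dan", {"name", "salary"}))    # salary visible
```

Muller additionally filters rows (horizontally) as well as columns (vertically); the same pattern extends by adding a per-role row predicate before the list comprehension.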
As per claim 11, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches performing a block-or-adapt analysis of (i) said original query, (ii) the pre-defined organizational policy rules, (iii) the user-related organizational context, and (iv) the data-related organizational context; and, based on results of said block-or-adapt analysis, performing one of: (I) blocking the original query from being executed and not generating an adapted query to replace it; or (II) modifying the original query into said adapted query (Muller, Paragraphs 0023-0028 and Paragraph 0109, quoted above). The motivation to combine is the same as set forth for claim 1.

As per claim 12, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Mantin further teaches wherein modifying the original query comprises: adding to the original query a set of grounding rules and constraints that indicate to said LLM that the LLM-generated output should not include a particular type of data (Mantin, Paragraphs 0039-0040, quoted above).
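The grounding-rules mechanism mapped to claim 12, appending prohibited-response instructions to the user query before it reaches the LLM (Mantin's Block 206), can be sketched as follows. The rule wording and function name are illustrative, not taken from either reference.

```python
# Sketch of prompt-constraining "grounding rules" prepended to a user
# query before it is sent to the LLM, in the spirit of Mantin's
# prohibited response instructions. Rule text is illustrative only.

def build_adapted_prompt(original_prompt: str, excluded_types: list[str]) -> str:
    """Wrap the original prompt with exclusion constraints."""
    rules = "\n".join(
        f"- Do not include {t} in your answer." for t in excluded_types
    )
    return (
        "You must follow these constraints:\n"
        f"{rules}\n\n"
        f"User question: {original_prompt}"
    )

adapted = build_adapted_prompt(
    "Summarize the Q3 project status.",
    excluded_types=["monetary amounts", "passwords or access credentials"],
)
print(adapted)
```

Only `adapted` would then be transmitted to the model, matching the claim's requirement that the adapted prompt, and not the original, is sent for processing.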
As per claim 13, Mantin in combination with Muller and Gupta teaches he computerized method of claim 1, Muller further teaches producing different LLM-generated outputs, for two or more different users of said organization, that submitted said original query, based on different user-related organizational context that is obtained with regard to each of said users (Muller, Paragraphs 0023-0028 recites “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).”) And Paragraph 0109 recites “As a result, the security runtime system 17 applies the security restrictions, as stored in metamodel 13, defined for the user by assigned user roles, by different people at different data levels (tables, subtables, rows, columns, entities) of model 23 relating to target database 19. Security runtime looks at each data level and applies security restrictions (the rules in the security data subsystem) as pertinent in modifying the query 11. This causes the data passed back via the REST call to be filtered horizontally (by target database 19 table rows) and vertically (by target database 19 table columns).”). 
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Muller’s computer relational database method and system having role based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly. As per claim 14, Mantin in combination with Muller and Gupta teaches he computerized method of claim 1, Muller further teaches based on the user-related organizational context, selectively causing said LLM to include or to exclude monetary amounts in said LLM-generated output (Muller, Paragraphs 0023-0028 recites “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).”) And Paragraph 0109 recites “As a result, the security runtime system 17 applies the security restrictions, as stored in metamodel 13, defined for the user by assigned user roles, by different people at different data levels (tables, subtables, rows, columns, entities) of model 23 relating to target database 19. Security runtime looks at each data level and applies security restrictions (the rules in the security data subsystem) as pertinent in modifying the query 11. 
This causes the data passed back via the REST call to be filtered horizontally (by target database 19 table rows) and vertically (by target database 19 table columns).” As written the term monetary is just listing a type of data to include of exclude, this essentially any data type can be filtered out by Mantin, based on the user of the invention). It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date to use Muller’s computer relational database method and system having role based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly. As per claim 15, Mantin in combination with Muller and Gupta teaches he computerized method of claim 1, Muller further teaches based on the user-related organizational context, selectively causing said LLM to include or to exclude date data in said LLM-generated output (Muller, Paragraphs 0023-0028 recites “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).”) And Paragraph 0109 recites “As a result, the security runtime system 17 applies the security restrictions, as stored in metamodel 13, defined for the user by assigned user roles, by different people at 
different data levels (tables, subtables, rows, columns, entities) of model 23 relating to target database 19. Security runtime looks at each data level and applies security restrictions (the rules in the security data subsystem) as pertinent in modifying the query 11. This causes the data passed back via the REST call to be filtered horizontally (by target database 19 table rows) and vertically (by target database 19 table columns).”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Muller’s computer relational database method and system having role-based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly.

As per claim 16, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches, based on the user-related organizational context, selectively causing said LLM to include or to exclude passwords or access credentials in said LLM-generated output (Muller, Paragraphs 0023-0028 recite “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).” Passwords and credentials would be known to be secure data types).
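For orientation, the mechanism Muller is cited for (intercept the query, determine the querying user's role, then filter out rows and columns the role has no rights to) can be sketched in a few lines. This is a hypothetical illustration under assumed role rules and table data, not code from any cited reference:

```python
# Assumed role rules: the columns each role may read (vertical filtering)
# and an optional row-level predicate (horizontal filtering).
ROLE_RULES = {
    "analyst": {"columns": {"name", "region", "amount"}, "predicate": None},
    "intern":  {"columns": {"name", "region"},
                "predicate": lambda row: row["region"] == "EU"},
}

def run_query(table, requested_columns, role):
    """Rewrite and run a query so that data the role may not access
    is filtered out before any results are returned."""
    rules = ROLE_RULES[role]
    cols = [c for c in requested_columns if c in rules["columns"]]  # vertical
    pred = rules["predicate"]
    rows = [r for r in table if pred is None or pred(r)]            # horizontal
    return [{c: r[c] for c in cols} for r in rows]

TABLE = [
    {"name": "Acme",   "region": "EU", "amount": 100},
    {"name": "Globex", "region": "US", "amount": 250},
]
```

An "intern" asking for name and amount would receive only the name column of the EU row; an "analyst" would receive both columns of both rows.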
It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Muller’s computer relational database method and system having role-based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly.

As per claim 17, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches, based on the user-related organizational context, providing to two or more different users LLM-generated outputs that focus on different aspects of a project that is a subject of the original query (Muller, Paragraphs 0023-0028 recite “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Muller’s computer relational database method and system having role-based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly.
As per claim 18, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1. Muller further teaches wherein said pre-defined organizational policy rules comprise one of: (i) LLM access constraints that are pre-defined for a particular religious institution, and that limit particular topics and particular keywords that the LLM is authorized to generate in response to queries from particular users of said particular religious institution; (ii) LLM access constraints that are pre-defined for a particular educational institution, and that limit particular topics and particular keywords that the LLM is authorized to generate in response to queries from particular users of said particular educational institution; (iii) LLM access constraints that are pre-defined for a particular home network, and that limit particular topics and particular keywords that the LLM is authorized to generate in response to queries from particular users of said particular home network (Muller, Paragraph 0043 recites “In one embodiment, the governance structure of repository 15 can tailor and control how services are provided and customized for specific users, groups and organizations.” Muller teaches that the services can be tailored to various types of organizations; while it does not explicitly recite religious or educational institutions, the system can be tailored to those or any other type of organizational needs.).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Muller’s computer relational database method and system having role-based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly.

Regarding claims 19 and 20: claims 19 and 20 are directed to a system and a non-transitory storage medium associated with the method of claim 1.
Claims 19 and 20 are of similar scope to claim 1, and are therefore rejected under similar rationale.

Claim(s) 5 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mantin et al. (US 2025/0111092), Muller et al. (US 2011/0302180) and Gupta et al. (US 2025/0317474), and further in view of Cochran et al. (US 2006/0149156).

As per claim 5, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1, but fails to teach wherein obtaining the user-related organizational context comprises: analyzing organizational data sources, and determining from event audit logs whether the querying user is authorized or unauthorized to access a particular type of data. However, in an analogous art, Cochran teaches wherein obtaining the user-related organizational context comprises: analyzing organizational data sources, and determining from event audit logs whether the querying user is authorized or unauthorized to access a particular type of data (Cochran, Paragraph 0100 recites “In one implementation of the transmit data function, the user logs on to the HG client app. The application will check the credentials of the user against the list of authorized users and an audit log entry will be performed based on the success or failure of the attempt to log on.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Cochran’s Method And Apparatus For Transfer Of Captured Electrocardiogram Data with Mantin’s leakage detection for large language models because it offers the advantage of looking into a user’s logs to determine data authorization.

Claim(s) 9 and 10 is/are rejected under 35 U.S.C. 103 as being unpatentable over Mantin et al. (US 2025/0111092), Muller et al. (US 2011/0302180) and Gupta et al. (US 2025/0317474), and further in view of Soeder (US 2012/0117644).
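The audit-log determination recited in claim 5 (and mapped above to Cochran's log-on auditing) can be illustrated with a minimal sketch. The log schema and event names below are assumptions made purely for illustration:

```python
# Assumed audit-log schema: each entry records a user, a data type,
# and whether the access attempt succeeded or failed.
AUDIT_LOG = [
    {"user": "alice", "data_type": "payroll", "event": "ACCESS_GRANTED"},
    {"user": "bob",   "data_type": "payroll", "event": "ACCESS_DENIED"},
]

def is_authorized(user, data_type, log):
    """Determine from event audit logs whether the querying user is
    authorized to access the given data type: the most recent entry
    for (user, data_type) must record a successful access."""
    events = [e["event"] for e in log
              if e["user"] == user and e["data_type"] == data_type]
    return bool(events) and events[-1] == "ACCESS_GRANTED"
```

A user with no log history at all is treated as unauthorized, matching the claim's framing of deriving authorization from organizational data sources rather than assuming it.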
As per claim 9, Mantin in combination with Muller and Gupta teaches the computerized method of claim 1, but fails to teach (d) instead of routing the LLM-generated output directly to the querying user, routing the LLM-generated output to a post-processing sanitization unit that checks whether or not the LLM-generated output complies with said pre-defined organizational policy rules. However, in an analogous art, Soeder teaches (d) instead of routing the LLM-generated output directly to the querying user, routing the LLM-generated output to a post-processing sanitization unit that checks whether or not the LLM-generated output complies with said pre-defined organizational policy rules (Soeder, Paragraph 0080 recites “[I]f the web request string (or a transformation thereof) matches the portion of the intercepted database query, and if the string contains a character that would modify the syntax of the database query, then the database query can be sanitized. Restated, the portion of the web request string that would modify the syntax of the database query can be modified to provide database query content without the modified syntax.”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Soeder’s System And Method For Internet Security with Mantin’s leakage detection for large language models because it offers the advantage of ensuring the data is ready for processing through data sanitization.
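Claim 9 routes the LLM-generated output through a post-processing sanitization unit instead of directly to the user, and claim 10 adds masking of non-compliant portions as one remedy. A minimal sketch of such a unit, assuming a single SSN-like policy pattern chosen purely for illustration (not taken from Soeder):

```python
import re

# Assumed organizational policy: output must not contain SSN-like strings.
POLICY_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]

def sanitize(llm_output):
    """Post-processing sanitization unit: check the LLM-generated output
    against the policy rules and mask any non-compliant portions, so the
    querying user never receives the raw output directly."""
    masked = llm_output
    for pattern in POLICY_PATTERNS:
        masked = pattern.sub("[REDACTED]", masked)
    compliant = masked == llm_output  # unchanged output means no violations
    return {"compliant": compliant, "text": masked}
```

A compliant output passes through untouched; a non-compliant one is delivered with the offending span masked, rather than being blocked outright.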
As per claim 10, Mantin in combination with Muller, Gupta and Soeder teaches the computerized method of claim 9. Muller further teaches: if the post-processing sanitization unit determines that the LLM-generated output does not comply with said pre-defined organizational policy rules, then performing at said post-processing sanitization unit at least one of: (i) deleting particular portions of the LLM-generated output to make the LLM-generated output compliant with the said pre-defined organizational policy rules; (ii) masking particular portions of the LLM-generated output to make the LLM-generated output compliant with the said pre-defined organizational policy rules (Muller, Paragraphs 0023-0028 recite “In one embodiment, a method and/or system of controlling access to secured data in a database comprises: operatively coupling a repository to one or more databases storing secure data; configuring and employing the repository to intercept a user query of one of the databases; the repository being executable by a processor and the processor automatically determining from the intercepted query a user who generated the user query and a user role assigned to the user; based on determined user role, the processor automatically modifying the user query to filter out secure data for which the user does not have access rights (is ineligible or not allowed access); and applying the modified query to the one database to retrieve qualifying data (as authorized by user role).”).

It would have been obvious to a person of ordinary skill in the art, at the earliest effective filing date, to use Muller’s computer relational database method and system having role-based access control with Mantin’s leakage detection for large language models because it offers the advantage of ensuring authorized users are utilizing protected data properly.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action.
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RODERICK TOLENTINO whose telephone number is (571) 272-2661. The examiner can normally be reached Mon-Fri 8am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Luu Pham, can be reached at 571-270-5002. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RODERICK TOLENTINO/
Primary Examiner, Art Unit 2439

Prosecution Timeline

Jul 19, 2024
Application Filed
Sep 18, 2025
Non-Final Rejection — §103
Dec 18, 2025
Response Filed
Jan 23, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603907
SERVER AND METHOD FOR PROVIDING ONLINE THREAT DATA BASED ON USER-CUSTOMIZED KEYWORDS FOR PRIVATE CHANNEL
2y 5m to grant Granted Apr 14, 2026
Patent 12592915
INFERENCE-BASED SELECTIVE FLOW INSPECTION
2y 5m to grant Granted Mar 31, 2026
Patent 12580946
SYSTEMS AND METHODS FOR TRIGGERING TOKEN ALERTS
2y 5m to grant Granted Mar 17, 2026
Patent 12580948
CYBERSECURITY OPERATIONS MITIGATION MANAGEMENT
2y 5m to grant Granted Mar 17, 2026
Patent 12572632
SYSTEMS AND METHODS FOR DATA SECURITY MODEL MODIFICATION AND ANOMALY DETECTION
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
77%
Grant Probability
99%
With Interview (+35.4%)
3y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 705 resolved cases by this examiner. Grant probability derived from career allow rate.
