Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
The response of 02/27/26 was received and considered. Claims 3 and 11 are canceled. Claims 1-2, 4-10, and 12-22 are pending.
Response to Arguments
Applicant's arguments and amendments, filed 02/27/26, with respect to the rejection of claims 1-20 under 35 U.S.C. 102 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, new grounds of rejection are made under 35 U.S.C. 103 and 35 U.S.C. 101.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-2, 4-10, and 12-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
As per claims 1 and 9:
Step 1: Statutory Category: Claim 1 is directed towards a method (process) and Claim 9 is directed towards a system (machine).
Step 2A, Prong 1: Does it recite an abstract idea? Yes. The core steps of claims 1 and 9 involve collecting data (client data, policy guardrails), formatting that data (generating a prompt), analyzing the data using an AI model (providing the prompt to a fine-tuned model), and outputting a result (providing a network policy). The courts have consistently held that collecting information, analyzing it, and presenting the results is an abstract idea (often classified as a "mental process" or "certain methods of organizing human activity").
Step 2A, Prong 2: Is it integrated into a practical application? To be integrated into a practical application, the claim must improve the functioning of a computer or apply the abstract idea to a specific technology in a non-conventional way. The final step, "providing the network policy to a network access interface of the user device," stops at "providing." It does not explicitly state that the network interface is actually configured by the policy, nor does it describe how the network's physical or operational state is altered to improve security, routing, or bandwidth. Simply providing data to a component is often viewed as "insignificant post-solution activity."
Step 2B: Is there an inventive concept? The claim elements (user device, fine-tuned model, pre-trained model) are recited at a high level of generality. "User device" and "network access interface" are generic computer components invoked merely to perform their standard functions, and the USPTO views generic recitations of AI (e.g., "generating a prompt," "pre-trained model," "fine-tuned model") as well-understood, routine, and conventional. The claim does not recite how the fine-tuned model is uniquely structured, how the specific training process occurs, or a unique mathematical algorithm. It merely claims the use of an existing AI paradigm (prompting an LLM/model) to solve a known problem.
As per claim 17:
Step 1: Statutory Category: Yes. Process/Method.
Step 2A, Prong 1: Does it recite an abstract idea? Yes, claim 17 focuses on training an AI model using a labeled dataset of prompts and responses, and then deploying it. The USPTO frequently views the mathematical algorithms underlying ML training as abstract ideas. Furthermore, teaching a system to map inputs (device states) to outputs (policies) based on rules is essentially a "mental process."
Step 2A, Prong 2: Is it integrated into a practical application?
Training a model can be eligible if it provides a technical improvement to how computers operate (e.g., a fundamentally new way to optimize memory during training). However, Claim 17 recites highly generic, conventional supervised learning steps: taking a labeled dataset, training a model, deploying it, and running an inference to get a result. The inference step (providing the policy to the device) suffers from the same lack of physical/technical integration as claim 1.
Step 2B: Is there an inventive concept?
The steps of training a pre-trained model for a specific task using labeled datasets are the textbook definition of routine and conventional machine learning practices.
Claims 4, 5, 7, 12, 13, 15, 18, 22: Prompt Formatting and Data Manipulation: Claims that specify how the prompt is formatted (key-value pairs, templates, placeholders, subsets of data) are generally viewed as abstract data manipulation or formatting, and do not add technical character. Formatting, organizing, and manipulating data strings are inherently abstract mathematical or mental processes, or merely routine computer functions. Using key-value pairs, filling out digital templates, and updating variables are fundamental programming concepts. They do not represent a technological improvement to how the computer or network operates; they are simply instructions on how to format the text being sent to the AI model.
Claims 8 and 16: Data Content/Types: Specifying the types of network policies (prioritization, allocation, security) merely limits the abstract idea to a specific field of use, which does not overcome a § 101 rejection.
Claims 2, 6, 10, 14, 21: Feedback Loops: Updating a model or policy based on feedback or updated client data is a standard data collection and analysis loop, which does not integrate the abstract idea into a practical application.
Claims 19, 20: Standard ML Routines: Dividing datasets for training/validation, and using predetermined stopping conditions (cost, iteration, accuracy, convergence) are conventional ML steps that do not supply an inventive concept under Step 2B.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 4-10, and 12-22 are rejected under 35 U.S.C. 103 as being unpatentable over US 2024/0039916 to McPeak et al. in view of US 2025/0193244 to Singh et al.
Regarding claim 1, McPeak teaches a computer-implemented method for network policy generation, the method comprising:
determining first client data associated with a user device, the first client data comprising device information, network information, application information, and connection information (0044: data from a profile of a user that selects the blueprint may form part of the training data, including information about the user and information about activities of the user (e.g., blueprint options that the user did or did not select and context information surrounding each of those selections, such as information about other resources created by the user within a time interval of making that selection). Profile data may include any other information known about a user or other entity, such as characteristics of the user/users));
determining an organizational policy guardrail associated with the user device (0018: Policy enforcement service 130 is used by client devices (e.g., operations device 112 and/or security device 113) to generate guardrails. The term guardrail, as used herein, may refer to properties of resources that are to be adhered to by developers. The guardrails may be specific to types of resources—that is, databases having sensitive information is one type of resource, and databases accessible to certain geographies is another type of resource. The guardrails are defined by client devices having permissions to define constraints for given types of resources.);
generating a first prompt based on the first client data, the first prompt comprising the organizational policy guardrail (0028: owner determination module may prompt the owner to confirm that that user is indeed the owner. In another embodiment, owner determination module 208 may simply conclude that this person is the owner. 0029: ownership determination module 208 accesses a machine learning model trained to identify an owner of the file);
providing the first prompt to a task-specific fine-tuned model based on a pre-trained model (0031: After the owner is determined, configuration recommendation module 210 prompts the owner with a set of recommended configuration changes, which may be determined based on a comparison of configuration settings of the pre-existing resource to the policy constraints. Fig. 9, step 960);
McPeak lacks or does not expressly disclose
receiving, from the fine-tuned model and responsive to providing the first prompt, a network policy constrained by the organizational policy guardrail, the network policy specifying a network access configuration associated with an application executing on the user device.
However, Singh teaches
receiving, from the fine-tuned model and responsive to providing the first prompt, a network policy constrained by the organizational policy guardrail, the network policy specifying a network access configuration associated with an application executing on the user device (0046: FIG. 2 illustrates a schematic of system 200 for using natural language input to set security policies for a network such as an enterprise network (e.g., Enterprise Network 102, FIG. 1). A Security Administrator 112 can send a Natural Language Security Policy Request 202 to a Security Policy Engine 204. The Security Policy Engine 204 can include a Policy Assistant Service 206, an Intermediate Service 208, an Analytics Engine 210, and a Policy-Bot 212. The Policy Assistant Service 206 can process the Natural Language Security Policy Request 202. The Policy Assistant Service 206 can employ Artificial Intelligence (AI) models (AI Fine-Tuned Models 214) to translate and clarify the Natural Language Security Policy Request 202. The AI Fine-Tuned Models 214 employ a question-and-answer model (Q&A Model 216) and Rule as a Conversation (RaaC) model (RaaC Model 218) to translate, interpret and clarify the Natural Language Security Policy Request 202.).
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify McPeak with Singh to teach receiving a network policy specifying a network access configuration, in order to improve network security, as taught by Singh, paragraph 0046.
McPeak, as modified above, further discloses
providing the network policy to a network access interface of the user device (Fig. 10, Policy enforcement service 130 may generate for display the user interface, each blueprint comprising a selectable option that, when selected, leads to fields for configuring the resource (e.g., using resource configuration module 810).).
Regarding claim 2, McPeak, as modified above, further teaches the method of claim 1, further comprising: receiving network policy feedback associated with a user of the user device; and generating a user policy guardrail based on the network policy feedback, wherein the first prompt further comprises the user policy guardrail and an instruction to prioritize the organizational policy guardrail over the user policy guardrail (0025: Policy definition module 202 may receive a selection from the user, and may re-train the machine learning model using that feedback. This may result in a different ordering of candidate segments in the future.).
Regarding claim 4, McPeak, as modified above, further teaches the method of claim 1, wherein said generating a first prompt based on the first client data comprises: determining, from the first client data, a parameter of the user device; determining, from the first client data, a value for the parameter of the user device; and generating the first prompt by including the parameter and the value of the parameter as a key and value pair in the first prompt (0017: FIG. 1, policy enforcement service environment 100 includes various client devices, including a developer device 110, operations device 112, and security device 113, as well as network 120, policy enforcement service 130, and generative artificial intelligence (AI) tool 140. While policy enforcement application 111 is only depicted with respect to developer device 110, this is for convenience only, and may exist on any client device.).
Regarding claim 5, McPeak, as modified above, further teaches the method of claim 1, wherein said generating a first prompt based on the first client data comprises: determining, from the first client data, a value for a parameter of the user device; and generating the first prompt by replacing a placeholder in a prompt template with the value (0020: when developer device 110 generates a resource, policy enforcement application 111 forces the resource to have properties that adhere to the defined constraints. Policy enforcement service 130 is instantiated on one or more servers outside of the service of developer device 110, accessible by way of network 120).
Regarding claim 6, McPeak, as modified above, further teaches the method of claim 1, further comprising:
determining updated client data comprising a subset of the first client data that has changed since the generation of the network policy; determining unchanged client data comprising a subset of the first client data that remains unchanged since the generation of the network policy; generating an updated prompt based on the updated client data and the unchanged client data; providing the updated prompt to the fine-tuned model; receiving, responsive to providing the updated prompt, an updated network policy from the fine-tuned model; and providing the updated network policy to the user device (0056: Tags may be mapped to a data structure that defines the guardrails, where the data structure is modifiable by one or more users. Following creation of a blueprint, where the data structure corresponding to tag is modified, that modification applies to the blueprints featuring the tag, thus causing the blueprints to include that update as new resources are created using that blueprint. Fig. 9).
Regarding claim 7, McPeak, as modified above, further teaches the method of claim 6, wherein said generating an updated prompt comprises: retrieving the first prompt; determining, based on the updated client data, updated values associated with the subset of the first client data that has changed; and generating the updated prompt by replacing, in the first prompt, the subset of the first client data that has changed with the updated values (0058: policy enforcement service 130 may determine 930 that the pre-existing resource is of the given type (e.g., using reconciliation module 206) and, responsive to determining that the pre-existing resource is of the given type, may determine 940 that the pre-existing resource does not comply with the policy constraints. Policy enforcement service 130 may determine 950 an owner of the resource based on metadata associated with the resource (e.g., using owner determination module 208), and may prompt 960 the owner with a set of recommended configuration changes (e.g., using configuration recommendation module 210). Responsive to receiving a selection of a selectable option from the owner, policy enforcement service 130 may reconfigure 970 the resource with the recommended configuration changes.).
Regarding claim 8, McPeak, as modified above, further teaches the method of claim 1, wherein the network policy comprises at least one of: a network resource prioritization policy associated with at least one of: a device identifier, an application identifier, a domain identifier, a protocol identifier, or a network identifier; a network resource allocation policy associated with at least one of: a device identifier, an application identifier, a domain identifier, a protocol identifier, or a network identifier; a network access policy associated with at least one of: a device identifier, an application identifier, a domain identifier, a protocol identifier, or a network identifier; or a network security requirement associated with at least one of: a device identifier, an application identifier, a domain identifier, a protocol identifier, or a network identifier (0045: Data of domain (e.g., a domain in which the team operates where there are multiple teams within that domain) may be taken from profiles of the users within that domain in similar fashion, and so on. A classification of the domain may be used as a signal (e.g., a resource is being created for a domain in the information technology space versus the administrative space, information technology and administrative being example classifications). 0028: Ownership determination module 208 may identify a user identifier within a log of the log source (e.g., a handle or contact address of a candidate owner), and may determine that the owner is the person identified by the user identifier. In an embodiment, owner determination module may prompt the owner to confirm that that user is indeed the owner. In another embodiment, owner determination module 208 may simply conclude that this person is the owner.).
As per claims 9-16, these claims recite a system version of the method discussed above in claims 1-8, wherein all claimed limitations have been addressed and/or cited as set forth above.
Regarding claim 17, McPeak teaches a method for task-specific fine-tuning of a pre-trained model comprising: determining a labeled training dataset for task-specific fine-tuning of the pre-trained model, the labeled training dataset comprising a prompt comprising a set of key and value pairs indicative of a device state associated with a user device, and a corresponding response comprising a network policy corresponding to the device state; training the pre-trained model based on the labeled training dataset to generate a fine-tuned model; and deploying the fine-tuned model (0057: FIG. 9: Process 900 may be executed by one or more processors (e.g., processor 302 of policy enforcement service 130) executing instructions (e.g., instructions 324). Process 900 may begin with policy enforcement service 130 receiving 910, by way of a policy enforcement application (e.g., policy enforcement application 111), input specifying policy constraints for resources of a given type (e.g., as performed using policy definition module 202). Policy enforcement service 130 may import 920 a pre-existing resource into the policy enforcement application (e.g., using resource importation module 204).)
Regarding claim 18, McPeak, as modified above, further teaches the method of claim 17, wherein said determining a labeled training dataset comprises: determining a prompt template comprising first placeholders associated with user device parameters; determining a response template comprising second placeholders associated with network policy parameters; and automatically generating training samples for the labeled training dataset by replacing the first placeholders with parameter values indicative of a sample state of device, and the second placeholders with network policy values corresponding to the sample state (0024: Policy definition module 202 may train a machine learning model using training examples to generate recommendations. The training examples may be specific to a user based on prior policies created by the user, or may be specific to a group of users (e.g., training examples across a team, department, or conglomerate may be used). The training examples may include collections of segments as labeled by a resource and/or resource type and/or resource attribute.).
Regarding claims 19 and 22, McPeak, as modified above, further teaches the method of claim 17, wherein said training the pre-trained model comprises: dividing the labeled training dataset into a first dataset and a second dataset; iteratively training the pre-trained model based on the first dataset to generate an intermediate model; validating the intermediate model using the second dataset; determining, based on said validating, that the intermediate model satisfies a predetermined training condition; and concluding training of the pre-trained model (0030: The machine learning model may be trained by generating embeddings for different segments of code and metadata within a resource. For example, lines of code and metadata that include a user identifier may be converted into a semantic representation in latent space using a supervised machine learning model. Owner determination module 208 may then use an unsupervised machine learning model to determine the distance in latent space between one or more example owner embedding representations and each semantic representation. Where a distance is below a threshold, owner determination module 208 may determine that the user identifier within the corresponding text to the latent representation is the owner.).
Regarding claim 20, McPeak, as modified above, further teaches the method of claim 19, wherein the predetermined training condition comprises at least one of: a temporal condition; a cost condition; an iteration condition; an accuracy condition; an error condition; or a convergence condition (0024: Policy definition module 202 may display candidate segments in a menu, list, or other navigable tool for selection, where users may select from functional blocks including conditions (e.g., “if” statements), as well as requirements (e.g., what to do where conditions are met). 0069: permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.).
Regarding claim 21, McPeak, as modified above, further teaches the method of claim 19, further comprising: receiving, as part of the first prompt, a user policy guardrail and an instruction to prioritize the organizational policy guardrail over the user policy guardrail (0057: Process 900 may be executed by one or more processors (e.g., processor 302 of policy enforcement service 130) executing instructions (e.g., instructions 324). Process 900 may begin with policy enforcement service 130 receiving 910, by way of a policy enforcement application (e.g., policy enforcement application 111), input specifying policy constraints for resources of a given type (e.g., as performed using policy definition module 202). Policy enforcement service 130 may import 920 a pre-existing resource into the policy enforcement application (e.g., using resource importation module 204).).
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AUBREY H WYSZYNSKI whose telephone number is (571)272-8155. The examiner can normally be reached M-F 9-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, ALI SHAYANFAR can be reached at 571-270-1050. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AUBREY H WYSZYNSKI/Primary Examiner, Art Unit 2434