DETAILED ACTION
The following is a Final Office action. In response to Examiner’s communication of 12/12/25, Applicant, on 2/17/2026, amended claims 1-5, 9-13, 15, 17, and 19-20. Claims 1-20 are now pending and have been rejected as indicated below.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
Applicant’s amendments are acknowledged.
The 35 USC 101 rejections of claims 1-20 regarding abstract ideas are maintained in light of Applicant's amendments and explanations.
Revised 35 USC 103 rejections of claims 1-20 are applied in light of Applicant's amendments and explanations.
Claim Rejections - 35 U.S.C. § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more. Here, under the broadest reasonable interpretation of the claimed invention, Examiner finds that Applicant has claimed a method and system for generating prompts for input into an AI tool, determining validity of the output, and preparing a second prompt for input into the AI tool. Examiner formulates an abstract idea analysis, following the framework described in the MPEP, as follows:
Step 1: The claims are directed to a statutory category, namely a "method" (claims 9-16) and "system" (claims 1-8, 17-20).
Step 2A - Prong 1: The claims are found to recite limitations that set forth the abstract idea(s), namely, regarding claim 1:
receiving a query to identify risks associated with the project;
generating… the query embedding from the query
retrieving… using a retrieval augmented generation (RAG) pattern, data segments similar to the query from data related to the project;
generating a prompt for transmission to a generative artificial intelligence (AI) tool, the prompt including the data segments;
providing the one or more identified risks and the one or more recommended actions for mitigating the at least one of the one or more identified risks to a review AI agent for validating at least one of the one or more identified risks and the one or more recommended actions;
in response to a threshold number of the one or more identified risks or the one or more recommended actions being invalidated, utilizing a user agent
to generate a revised query for including in a revised prompt to the generative AI tool, the revised query
identifying at least one of the invalidated risks or invalidated recommended actions
generating the revised prompt for transmission to the generative AI tool
Independent claims 9 and 17 recite substantially similar claim language.
Dependent claims 2-8, 10-16, and 18-20 recite the same or similar abstract idea(s) as independent claims 1, 9, and 17 with merely a further narrowing of the abstract idea(s) to particular data characterization and/or additional data analyses performed as part of the abstract idea.
The limitations in claims 1-20 identified above fall well within the groupings of subject matter identified by the courts as being abstract concepts; specifically, the claims are found to correspond to the following categories:
"Certain methods of organizing human activity- fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions)" as the limitations identified above are directed to generating prompts for input into an AI tool, determining validity of the output, and preparing a second prompt for input into the AI tool and thus is a method of organizing human activity including at least commercial or business interactions or relations and/or a management of user personal behavior; and/or
"Mental processes - concepts performed in the human mind (including an observation, evaluation, judgement, opinion)" as the limitations identified above include mere data observations, evaluations, judgements, and/or opinions, e.g. including user observation and evaluation of data relating to generating prompts for input into an AI tool, determining validity of the output, and preparing a second prompt for input into the AI tool, which is capable of being performed mentally and/or using pen and paper.
Step 2A - Prong 2: Claims 1-20 are found to clearly be directed to the abstract idea identified above because the claims, as a whole, fail to integrate the claimed judicial exception into a practical application; specifically, the claims recite the additional elements of:
"providing the revised output for display to a user" (claims 1, 9, and 17) "wherein the output or the revised output is provided for display in a dashboard for the project" (claim 8), however the aforementioned elements directed to the receiving of user input/selection of data to view via a dashboard and displaying corresponding data via the dashboard merely amount to generic GUI elements of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and/or is merely an attempt at limiting the abstract idea of analysis and review/visualization of data related to a particular field of use/technological environment of a GUI dashboard (MPEP 2106.05(h)) and therefore the GUI dashboard input and display of data fails to integrate the abstract idea into a practical application;
"A data processing system for identifying one or more risks associated with a project, the data processing system comprising: a processor; and a memory in communication with the processor, the memory comprising executable instructions that, when executed by the processor alone or in combination with other processors, cause the data processing system to perform functions of: … generating, by an embedding engine… retrieving, by a comparing engine / A non-transitory computer readable medium on which are stored instructions that, when executed, cause a programmable device to perform functions of:" (claims 1, 9, and 17), "wherein generative AI tool is a large language model," (claim 10), "wherein the request is received via a project management application or service," (claim 18), "wherein the request is received via a copilot application or service," (claim 19), however the aforementioned elements merely amount to generic components of a general purpose computer used to "apply" the abstract idea (MPEP 2106.05(f)) and thus fail to integrate the recited abstract idea into a practical application; furthermore, the high-level recitation of receiving data from a generic "processing system" is at most an attempt to limit the abstract idea to a particular field of use (MPEP 2106.05(h), e.g.: "For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).") and/or merely insignificant extra-solution activity (MPEP 2106.05(g)) and thus further fails to integrate the abstract idea into a practical application;
"transmitting the prompt to the generative AI tool… receiving from the generative AI tool one or more identified risks for the project and one or more recommended actions for mitigating at least one of the identified risks… transmitting the revised prompt to the generative AI tool… receiving from the generative AI tool a revised output includes one or more revised identified risks or one or more revised recommended actions for mitigating the one or more revised identified risks" (claims 1, 9, and 17), however the receiving of data from these various sources is merely insignificant extra-solution activity, e.g. data gathering, and/or merely an attempt at limiting the abstract idea to a particular field of use and thus fails to integrate the recited abstract idea into a practical application (e.g. MPEP 2106.05(h): "Examiners should keep in mind that this consideration overlaps with other considerations, particularly insignificant extra-solution activity (see MPEP § 2106.05(g)). For instance, a data gathering step that is limited to a particular data source (such as the Internet) or a particular type of data (such as power grid data or XML tags) could be considered to be both insignificant extra-solution activity and a field of use limitation. See, e.g., Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (limiting use of abstract idea to the Internet); Electric Power, 830 F.3d at 1354, 119 USPQ2d at 1742 (limiting application of abstract idea to power grid data); Intellectual Ventures I LLC v. Erie Indem. Co., 850 F.3d 1315, 1328-29, 121 USPQ2d 1928, 1939 (Fed. Cir. 2017) (limiting use of abstract idea to use with XML tags).");
Step 2B: Claims 1-20 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the additional elements as described above with respect to Step 2A Prong 2 merely amount to a general purpose computer that attempts to apply the abstract idea in a technological environment (MPEP 2106.05(f)), including merely limiting the abstract idea to a particular field of use of risk analysis of a "processing system" via a GUI "display", as explained above, and/or performs insignificant extra-solution activity, e.g. data gathering or output (MPEP 2106.05(g)), as identified above, which is further found under Step 2B to be merely well-understood, routine, and conventional activity as evidenced by MPEP 2106.05(d)(II) (describing conventional activities that include transmitting and receiving data over a network, electronic recordkeeping, storing and retrieving information from memory, electronically scanning or extracting data from a physical document, and a web browser's back and forward button functionality). Similarly, the combination and arrangement of the above identified additional elements, when analyzed under Step 2B, also fails to necessitate a conclusion that the claims amount to significantly more than the abstract idea directed to generating prompts for input into an AI tool, determining validity of the output, and preparing a second prompt for input into the AI tool.
Claims 1-20 are accordingly rejected under 35 U.S.C. § 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.
Note: The analysis above applies to all statutory categories of invention. As such, the presentment of any claim otherwise styled as a machine or manufacture, for example, would be subject to the same analysis.
For further authority and guidance, see:
MPEP § 2106
https://www.uspto.gov/patents/laws/examination-policy/subject-matter-eligibility
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication Number 2016/0283875 to Birdi (hereafter referred to as Birdi) in view of U.S. Patent Application Publication Number 2025/0173555 to Luus et al. (hereafter referred to as Luus) in further view of U.S. Patent Application Publication Number 2025/0238745 to Sarkar (hereafter referred to as Sarkar) in even further view of U.S. Patent Application Publication Number 2021/0241231 to Mullins et al. (hereafter referred to as Mullins) and in even further view of U.S. Patent Application Publication Number 2025/0298792 to Tongaonkar et al. (hereafter referred to as Tongaonkar).
As per claim 1, Birdi teaches:
A data processing system for identifying one or more risks associated with a project, the data processing system comprising: a processor; and a memory in communication with the processor, the memory comprising executable instructions that, when executed by the processor alone or in combination with other processors, cause the data processing system to perform functions of: (Paragraph Number [0069] teaches a computing system 1100 that may be used to implement the present technology. System 1100 of FIG. 11 may be used to implement a computing device, server (i.e. application server, network server), and data repositories (i.e. knowledge base) in the context of the system of FIG. 2. The computing system 1100 of FIG. 11 includes one or more processors 1110 and memory 1120. Main memory 1120 stores, in part, instructions and data for execution by processor 1110. Main memory 1120 can store the executable code when in operation. Main memory 1120 may also include a repository such as the repository illustrated in FIG. 1. The system 1100 of FIG. 11 further includes a mass storage device 1130, portable storage medium drive(s) 1140, output devices 1150, user input devices 1160, a graphics display 1170, and peripheral devices 1180).
receiving a query to identify risks associated with the project (Paragraph Number [0031] teaches using a computing device, for example, user 120 may subscribe (e.g., create an account) or register with the risk management system 200. Once the user 120 has registered with the risk management system 200, the user 120 may perform a login (i.e., access account) and may access the risk management system 200 to provide customer data. In one embodiment, user 120 is an analyst or consultant representing or otherwise working for customer 110 in association with a particular program or project to create a risk management report for customer 110. User 120 may also be a stakeholder. Paragraph Number [0044] teaches risk analysis tool 220 is a rules engine that compares the current state or conditions of a system/business process implementation to one or more future conditions to identify gaps and risks in implementing projects to stakeholder expectations and/or operational readiness. This is used to guide a user as they track the progress of the system. Risk analysis tool 220 may further compare the current and future conditions to other considerations stored in knowledge base 210 such as industry standards 210C, regulatory drivers 210D, SOPs 210E, industry best practices 210F, stakeholder data 210H, and other lessons learned and best practices 210J to generate the risk management report).
receiving from the generative AI tool one or more identified risks for the project and one or more recommended actions for mitigating at least one of the identified risks (Paragraph Number [0044] teaches risk analysis tool 220 is a rules engine that compares the current state or conditions of a system/business process implementation to one or more future conditions to identify gaps and risks in implementing projects to stakeholder expectations and/or operational readiness. This is used to guide a user as they track the progress of the system. Risk analysis tool 220 may further compare the current and future conditions to other considerations stored in knowledge base 210 such as industry standards 210C, regulatory drivers 210D, SOPs 210E, industry best practices 210F, stakeholder data 210H, and other lessons learned and best practices 210J to generate the risk management report. Paragraph Number [0050] teaches quality management tool 270 recommends a mitigation strategy for the gap(s) and risk(s) identified by risk analysis tool 220. Quality management tool 270 may further identify one or more tasks, actions, or activities that are required to mitigate the risks. Quality management tool 270 may also recommend which person, party, department, agency, etc. is responsible for completing such tasks, actions, or activities. Quality management tool 270 may also create and recommend budget requirements to execute the one or more recommended mitigation strategies).
receiving from the generative AI tool a revised output includes one or more revised identified risks or one or more revised recommended actions for mitigating the one or more revised identified risks (Paragraph Number [0048] teaches a change may occur before or after a risk management report is created by risk analysis tool 220. After a change in the program or project is tracked and logged by change management tool 230, a revised risk management report may be created by risk analysis tool 220. In one embodiment, the revised risk management report is created after approval or sign-off is received at the risk management system 200 from a relevant stakeholder or user 120 affected by the change. Paragraph Number [0050] teaches quality management tool 270 recommends a mitigation strategy for the gap(s) and risk(s) identified by risk analysis tool 220. Quality management tool 270 may further identify one or more tasks, actions, or activities that are required to mitigate the risks. Quality management tool 270 may also recommend which person, party, department, agency, etc. is responsible for completing such tasks, actions, or activities. Quality management tool 270 may also create and recommend budget requirements to execute the one or more recommended mitigation strategies).
providing the revised output for display to a user (Paragraph Number [0057] teaches interface 500 may also include filter criteria 570. For example, the filter criteria of “Area” in filter criteria 570 allows user 120 to select which data should be displayed in the main grid 510. Paragraph Number [0058] teaches an interface for displaying details associated with a Standard Operating Procedure (SOP). Interface 600 shows details regarding the “Perimeter Intrusion” SOP shown in main frame 510. Selecting the “New” 610 button allows a blank screen to open so that user 120 may enter and/or save a new SOP. Moving from interface 500 to interface 600 is consistent with risk management system's 200 ability to link intangible SOPs with tangible parts, systems, facilities of the project that are being built, constructed, designed, or implemented. The SOP level information in interface 600 includes standards 620, regulations 630 and best practices 640 while the activity level information 650 (one SOP may have multiple activities) is related to the rest of the boxes in interface 600 for SFOR elements, systems, facilities/infrastructure, stakeholders. Thus, selecting a particular activity will generate one or more records to populate the boxes for SFOR elements, systems, facilities/infrastructure, stakeholders. Interface 600 allows user 120 to add one or more activities. For each activity, user 120 may enter relevant data related to SFOR elements, systems, facilities and stakeholders. Interface 600 allows for a work flow diagram that captures SOP information as it relates to systems, facilities, operations, stakeholders, etc. related to the project. Information entered into interface 600 may be stored in 210).
Birdi teaches identifying risks associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach inputting the data into an AI tool in the form of a prompt and iterating on the results, which is taught by the following citations from Luus:
generating a prompt for transmission to a generative artificial intelligence (AI) tool, the prompt including the data segments (Paragraph Number [0026] teaches the selected generative model 126b is a generative AI model trained to generate content (e.g., textual, spreadsheet, chart, report, audio, image, video, and the like) in response to natural language prompts input by a user via the native application 114 or via the web. The selected generative model 126b is implemented using a large language model (LLM) in some implementations. Examples of such models include but are not limited to a Generative Pre-trained Transformer 3 (GPT-3) or GPT-4 model. Other implementations may utilize other models or other generative models to set up a statistical test according to the presentation style/format of the user. Paragraph Number [0031] teaches the prompt construction unit 124 then uses parameters/metrics associated with the statistical test obtained from the expert knowledge database(s) 134 to generate meta-prompts for parameter values. The prompt construction unit 124 parses and filters the parameter metrics provided to extract relevant definitions and possible values of the parameters and to generate prompts in a format that can be included in the prompt to the selected generative model 126b. Additional details of a selected calculation tool, e.g., a statistical analysis tool 132a are shown in FIG. 3, which is described in detail in the examples which follow).
transmitting the prompt to the generative AI tool (Paragraph Number [0027] teaches the request processing unit 122 receives a user request to set up and execute a statistical test from the native application 114 or the browser application 112. For example, the user request is a natural language prompt input by the user and is then passed on to the prompt construction unit 124. The natural language prompt requests to set up and execute a statistical test and identify the user submitting the natural language prompt. The natural language prompt may imply or indicate that the user would like to have the statistical test set up and executed by a generative model (e.g., a general generative model 126a in a generative model zoo 126). For example, the user request is expressed in a user prompt: “help me set up a statistical test,” or “I want to use ChatGPT to set up a statistical test.” Paragraph Number [0031] teaches the prompt construction unit 124 then uses parameters/metrics associated with the statistical test obtained from the expert knowledge database(s) 134 to generate meta-prompts for parameter values. The prompt construction unit 124 parses and filters the parameter metrics provided to extract relevant definitions and possible values of the parameters and to generate prompts in a format that can be included in the prompt to the selected generative model 126b. Additional details of a selected calculation tool, e.g., a statistical analysis tool 132a are shown in FIG. 3, which is described in detail in the examples which follow).
to generate a revised query for including in a revised prompt to the generative AI tool, the revised query (Paragraph Number [0026] teaches the selected generative model 126b is a generative AI model trained to generate content (e.g., textual, spreadsheet, chart, report, audio, image, video, and the like) in response to natural language prompts input by a user via the native application 114 or via the web. The selected generative model 126b is implemented using a large language model (LLM) in some implementations. Examples of such models include but are not limited to a Generative Pre-trained Transformer 3 (GPT-3) or GPT-4 model. Other implementations may utilize other models or other generative models to set up a statistical test according to the presentation style/format of the user. Paragraph Number [0031] teaches the prompt construction unit 124 then uses parameters/metrics associated with the statistical test obtained from the expert knowledge database(s) 134 to generate meta-prompts for parameter values. The prompt construction unit 124 parses and filters the parameter metrics provided to extract relevant definitions and possible values of the parameters and to generate prompts in a format that can be included in the prompt to the selected generative model 126b. Additional details of a selected calculation tool, e.g., a statistical analysis tool 132a are shown in FIG. 3, which is described in detail in the examples which follow. Paragraph Number [0035] teaches additional details of the prompt construction unit 124 are shown in FIGS. 3-4, which is discussed in detail in the examples which follow. The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the selected generative model 126b. 
The selected generative model 126b is trained using training data in this standardized format, in some implementations, and utilizing this format for the prompts provided to the selected generative model 126b may improve the predictions provided by the selected generative model 126b. In another embodiment, the prompt construction unit 124 can determine based on the expert knowledge data 134a that there are other models better fitting the statistical test, and update the generative model accordingly).
generating the revised prompt for transmission to the generative AI tool (Paragraph Number [0026] teaches the selected generative model 126b is a generative AI model trained to generate content (e.g., textual, spreadsheet, chart, report, audio, image, video, and the like) in response to natural language prompts input by a user via the native application 114 or via the web. The selected generative model 126b is implemented using a large language model (LLM) in some implementations. Examples of such models include but are not limited to a Generative Pre-trained Transformer 3 (GPT-3) or GPT-4 model. Other implementations may utilize other models or other generative models to set up a statistical test according to the presentation style/format of the user. Paragraph Number [0031] teaches the prompt construction unit 124 then uses parameters/metrics associated with the statistical test obtained from the expert knowledge database(s) 134 to generate meta-prompts for parameter values. The prompt construction unit 124 parses and filters the parameter metrics provided to extract relevant definitions and possible values of the parameters and to generate prompts in a format that can be included in the prompt to the selected generative model 126b. Additional details of a selected calculation tool, e.g., a statistical analysis tool 132a are shown in FIG. 3, which is described in detail in the examples which follow. Paragraph Number [0035] teaches additional details of the prompt construction unit 124 are shown in FIGS. 3-4, which is discussed in detail in the examples which follow. The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the selected generative model 126b. 
The selected generative model 126b is trained using training data in this standardized format, in some implementations, and utilizing this format for the prompts provided to the selected generative model 126b may improve the predictions provided by the selected generative model 126b. In another embodiment, the prompt construction unit 124 can determine based on the expert knowledge data 134a that there are other models better fitting the statistical test, and update the generative model accordingly).
transmitting the revised prompt to the generative AI tool (Paragraph Number [0027] teaches the request processing unit 122 receives a user request to set up and execute a statistical test from the native application 114 or the browser application 112. For example, the user request is a natural language prompt input by the user and is then passed on to the prompt construction unit 124. The natural language prompt requests to set up and execute a statistical test and identify the user submitting the natural language prompt. The natural language prompt may imply or indicate that the user would like to have the statistical test set up and executed by a generative model (e.g., a general generative model 126a in a generative model zoo 126). For example, the user request is expressed in a user prompt: “help me set up a statistical test,” or “I want to use ChatGPT to set up a statistical test.” Paragraph Number [0031] teaches the prompt construction unit 124 then uses parameters/metrics associated with the statistical test obtained from the expert knowledge database(s) 134 to generate meta-prompts for parameter values. The prompt construction unit 124 parses and filters the parameter metrics provided to extract relevant definitions and possible values of the parameters and to generate prompts in a format that can be included in the prompt to the selected generative model 126b. Additional details of a selected calculation tool, e.g., a statistical analysis tool 132a are shown in FIG. 3, which is described in detail in the examples which follow).
Both Birdi and Luus are directed to risk management. Birdi discloses identifying risks associated with a project by receiving data regarding a project and analyzing the data. Luus improves upon Birdi by disclosing inputting the data into an AI tool in the form of a prompt and iterating on the results. One of ordinary skill in the art would be motivated to further include inputting the data into an AI tool in the form of a prompt and iterating on the results in order to efficiently apply machine learning analysis to the data and provide recommendations and conclusions.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of identifying risks associated with a project by receiving data regarding a project and analyzing the data in Birdi to further input the data into an AI tool in the form of a prompt and iterate on the results as disclosed in Luus, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Birdi teaches identifying risks associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach providing recommendations for mitigating risks, including validating which actions are possible or which risks do not meet a threshold, which is taught by the following citations from Sarkar:
providing the one or more identified risks and the one or more recommended actions for mitigating the at least one of the one or more identified risks to a review AI agent for validating at least one of the one or more identified risks and the one or more recommended actions (Paragraph Number [0174] teaches more specifically, in step 1702, process 1700 can provide and obtain results of a readiness questionnaire. In step 1704, process 1700 can extract data related to, inter alia: control, severity, cumulations, USD exposure range, etc. In step 1706, process 1700 expands and creates a dataset (e.g. data set obtained from readiness questionnaires, etc.). In step 1708, process 1700 can validate the dataset and apply one or more AI/ML techniques for predictions of valuation of risk exposure. In step 1710, process 1700 can provide UI options for depiction. In step 1712, process 1700 can apply integration and testing operations. In step 1714, process 1700 implements deployment operations. Paragraph Number [0176] teaches process 1800 determines the size and industry of the company and identifies risk value systems. In step 1804, process 1800 performs effort calculations based on heuristic data. This data is sent to step 1806, that expands and creates a dataset. In step 1808, process 1800 matches a value distribution to one or more trained patterns. In step 1810, process 1800 can provide UI options for depiction. In step 1812, process 1800 can apply integration and testing operations. In step 1814, process 1800 implements deployment operations).
in response to a threshold number of the one or more identified risks or the one or more recommended actions being invalidated, utilizing a user agent (Paragraph Numbers [0177]-[0179] teach an example process 1900 for anomaly detection in risk values, according to some embodiments. Hardware risk information system 1200 can use trend analysis and detection of risk values by using AI/ML algorithms to predict the risk values for the future months. A drastic difference may lead to alerts triggered in the system. More specifically, in step 1902, process 1900 builds a repository of existing patterns. In step 1904, process 1900 detects the seasonality, trends, and residue from the repository. This step can also detect anomalies. In step 1906, process 1900 trains an AI topology with the output patterns and detected anomalies of step 1904. In step 1908, process 1900 validates the dataset and applies AI/ML techniques. In step 1910, process 1900 applies UI options for depiction of output of previous steps. In step 1912, process 1900 implements integration and testing using the AI/ML techniques. In step 1914, process 1900 performs deployment operations. Paragraph Numbers [0207]-[0208] teach the CKCS 3604 uses a primary framework within its object model as the pivotal key for all other controls and sets the stage for use of a common control framework. As shown below, the left porting of the overall Risk/Maturity methodology of the CKCS 3604 set up the formal structure for all-inclusive control frameworks to be aligned and conjoined with the Control Validation Set. Akin to the CKCS 3604 section is the Control Validation Set section which brings the computational process of aligning ingested data with associated control metrics. This completes the CCM 3600 representing a single framework engine only missing its data for the complexity the world brings).
identifying at least one of the invalidated risks or invalidated recommended actions (Paragraph Numbers [0177]-[0179] teach an example process 1900 for anomaly detection in risk values, according to some embodiments. Hardware risk information system 1200 can use trend analysis and detection of risk values by using AI/ML algorithms to predict the risk values for the future months. A drastic difference may lead to alerts triggered in the system. More specifically, in step 1902, process 1900 builds a repository of existing patterns. In step 1904, process 1900 detects the seasonality, trends, and residue from the repository. This step can also detect anomalies. In step 1906, process 1900 trains an AI topology with the output patterns and detected anomalies of step 1904. In step 1908, process 1900 validates the dataset and applies AI/ML techniques. In step 1910, process 1900 applies UI options for depiction of output of previous steps. In step 1912, process 1900 implements integration and testing using the AI/ML techniques. In step 1914, process 1900 performs deployment operations. Paragraph Numbers [0207]-[0208] teach the CKCS 3604 uses a primary framework within its object model as the pivotal key for all other controls and sets the stage for use of a common control framework. As shown below, the left porting of the overall Risk/Maturity methodology of the CKCS 3604 set up the formal structure for all-inclusive control frameworks to be aligned and conjoined with the Control Validation Set. Akin to the CKCS 3604 section is the Control Validation Set section which brings the computational process of aligning ingested data with associated control metrics. This completes the CCM 3600 representing a single framework engine only missing its data for the complexity the world brings).
Both the combination of Birdi and Luus, and Sarkar, are directed to risk management. The combination of Birdi and Luus discloses identifying risk associated with a project by receiving data regarding a project and analyzing the data. Sarkar improves upon the combination of Birdi and Luus by disclosing providing recommendations for mitigating risks, including validating which actions are possible or which risks do not meet a threshold. One of ordinary skill in the art would be motivated to further include providing recommendations for mitigating risks, including validating which actions are possible or which risks do not meet a threshold, to efficiently resolve the potential risks by determining if a risk is severe enough to mitigate and to determine proper mitigation actions if the risk is severe.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of identifying risk associated with a project by receiving data regarding a project and analyzing the data in the combination of Birdi and Luus to further provide recommendations for mitigating risks, including validating which actions are possible or which risks do not meet a threshold, as disclosed in Sarkar, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, which is taught by the following citations from Mullins:
generating, by an embedding engine, the query embedding from the query (Paragraph Number [0052] teaches the collaborative project comprises a risk assessment, and the risk assessment descriptions are organized into sections and the names of the sections, such as a section for IT Security (section (1)) (or other structured document portions of a collaborative project description can also be leveraged), are optionally leveraged by pre-pending the section title to the question text before the task context vectorization module 220 of FIG. 2 performs an embedding, as would be apparent to a person of ordinary skill in the art, based on the present disclosure. Paragraph Number [0055] teaches the context of a given user can be obtained from, for example, one or more of: (i) a knowledge of the given user, (ii) skills of the given user, (iii) one or more credentials of the given user, (iv) a social media profile of the given user, (v) a resume of the given user, (vi) a biography of the given user, (vii) an employment history of the given user, (viii) an education history of the given user, and (ix) a job title of the given user (collectively, referred to herein as capabilities of a user). The context of the given user can then be embedded into a vector space by the user context vectorization module 230 and the similarity between the vector of the task (generated, for example, by the task context vectorization module 220 of FIG. 2) and the vector representing the user context (generated, for example, by the user context vectorization module 230 of FIG. 2) can be compared).
Both the combination of Birdi, Luus, and Sarkar, and Mullins, are directed to project management. The combination of Birdi, Luus, and Sarkar discloses identifying risk associated with a project by receiving data regarding a project and analyzing the data. Mullins improves upon the combination of Birdi, Luus, and Sarkar by disclosing determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks. One of ordinary skill in the art would be motivated to further include determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, to efficiently correlate project data with risk data and task assignment data and to efficiently assign tasks based on project data.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of identifying risk associated with a project by receiving data regarding a project and analyzing the data in the combination of Birdi, Luus, and Sarkar to further determine user and task embeddings by receiving data regarding a project and assign a project task by matching users to tasks as disclosed in Mullins, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach using a retrieval augmented generation pattern, which is taught by the following citations from Tongaonkar:
retrieving, by a comparing engine using a retrieval augmented generation (RAG) pattern, data segments similar to the query from data related to the project (Paragraph Number [0162] teaches automatically generating a seed dataset for a domain specific language (DSL) (e.g., a resource query language (RQL), and wherein the RQL is generated for RQL for multi-domain security applications); expanding the seed dataset for the DSL using a Large Language Model (LLM); and validating the seed dataset for the DSL, wherein the seed dataset for the DSL is input to the LLM for fine tune training of the LLM (e.g., fine-tuned for a cloud security application). Paragraph Number [0163] teaches the fine-tune trained LLM can automatically generate an RQL query in response to a natural language query using the fine-tuned LLM. Paragraph Number [0174] teaches the disclosed techniques for grammar powered retrieval augmented generation for domain specific languages can be applied to significantly lower the error rate for fine-tuned LLMs for DSLs. For example, based on experiments applying the disclosed techniques for fine-tuned LLMs for DSLs, the error rate dropped from more than 60% for just the fine-tuned model to less than 10% for the grammar empowered retrieval augmented DSL generation implemented solution. Paragraph Number [0222] teaches a flow diagram for grammar powered retrieval augmented generation for domain specific languages in accordance with some embodiments. In some embodiments, a process as shown in FIG. 11 is performed using an automatically generated resource query language (RQL) dataset and a fine-tuned Large Language Model (LLM), and techniques).
Both the combination of Birdi, Luus, Sarkar, and Mullins, and Tongaonkar, are directed to project management. The combination of Birdi, Luus, Sarkar, and Mullins discloses identifying risk associated with a project by receiving data regarding a project and analyzing the data. Tongaonkar improves upon the combination of Birdi, Luus, Sarkar, and Mullins by disclosing using a retrieval augmented generation pattern. One of ordinary skill in the art would be motivated to further include using a retrieval augmented generation pattern, to significantly lower the error rate for fine-tuned LLMs for DSLs.
Accordingly, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system and method of identifying risk associated with a project by receiving data regarding a project and analyzing the data in the combination of Birdi, Luus, Sarkar, and Mullins to further utilize a retrieval augmented generation pattern as disclosed in Tongaonkar, since the claimed invention is merely a combination of old elements, and in combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.
As per claim 9, claim 9 recites a method that is substantially similar to the steps performed by the system found in claim 1 and is rejected for the same reasons put forth in regard to claim 1.
As per claim 17, Birdi teaches:
A non-transitory computer readable medium on which are stored instructions that, when executed, cause a programmable device to perform functions of: (Paragraph Number [0069] teaches a computing system 1100 that may be used to implement the present technology. System 1100 of FIG. 11 may be used to implement a computing device, server (i.e. application server, network server), and data repositories (i.e. knowledge base) in the context of the system of FIG. 2. The computing system 1100 of FIG. 11 includes one or more processors 1110 and memory 1120. Main memory 1120 stores, in part, instructions and data for execution by processor 1110. Main memory 1120 can store the executable code when in operation. Main memory 1120 may also include a repository such as the repository illustrated in FIG. 1. The system 1100 of FIG. 11 further includes a mass storage device 1130, portable storage medium drive(s) 1140, output devices 1150, user input devices 1160, a graphics display 1170, and peripheral devices 1180. (See also Paragraph Number [0062])).
The remainder of the claim limitations are substantially similar to those found in claim 1 and are rejected for the same reasons put forth in regard to claim 1.
As per claims 2, 11, and 20, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claims 1, 9, and 17 respectively.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, which is taught by the following citations from Mullins:
generating user embeddings for one or more users associated with an enterprise, the user embeddings including information about at least one of tasks the one or more users are associated with or skillsets the one or more users have (Paragraph Number [0047] teaches the exemplary task context vectorization module 220 can employ one or more techniques to determine the context of tasks within a given collaborative project and otherwise aid in the determination of the meaning of the text, so that the text representation can be captured and compared. In one exemplary implementation, the exemplary task context vectorization module 220 leverages word embeddings that translate words into vectors. Words with similar meanings will have similar vectors, while unrelated words will have very different vectors. Each word in the task text can be converted to a vector (e.g., after stop words are removed) and the vectors can be averaged to create a vector representation for a given task. To compare similarity between questions, for example, another question will be embedded by the exemplary task context vectorization module 220 in the same (or substantially similar) way and a cosine similarity between resulting vectors will be computed, as discussed further below in conjunction with FIGS. 3A through 3C).
generating task embeddings for one or more tasks associated with the project (Paragraph Number [0047] teaches the exemplary task context vectorization module 220 can employ one or more techniques to determine the context of tasks within a given collaborative project and otherwise aid in the determination of the meaning of the text, so that the text representation can be captured and compared. In one exemplary implementation, the exemplary task context vectorization module 220 leverages word embeddings that translate words into vectors. Words with similar meanings will have similar vectors, while unrelated words will have very different vectors. Each word in the task text can be converted to a vector (e.g., after stop words are removed) and the vectors can be averaged to create a vector representation for a given task. To compare similarity between questions, for example, another question will be embedded by the exemplary task context vectorization module 220 in the same (or substantially similar) way and a cosine similarity between resulting vectors will be computed, as discussed further below in conjunction with FIGS. 3A through 3C).
comparing the task embeddings to the user embeddings to identify relevant users for the one or more tasks associated with the project (Paragraph Number [0057] teaches while one or more techniques described above leverage word embeddings, other techniques could be used as well to compare similarity between a task and a context of a user, such as knowledge or other capabilities. For example, as noted above, a similarity between a task and a context of a user can also (or alternatively) be performed using term frequency-inverse document frequency vectorization techniques, and/or a bag-of-words model. Once text is vectorized and made comparable, techniques such as text classification can also be leveraged. Text classification assigns a set of tags to text from a predefined set. Text classification applies tags to the text, which are initially human-generated. Thus, the matching of users to tasks based on shared tags can be explained (even if the selection of tags for each text is hard to explain)).
providing the identified relevant users for inclusion in the prompt (Paragraph Number [0057] teaches while one or more techniques described above leverage word embeddings, other techniques could be used as well to compare similarity between a task and a context of a user, such as knowledge or other capabilities. For example, as noted above, a similarity between a task and a context of a user can also (or alternatively) be performed using term frequency-inverse document frequency vectorization techniques, and/or a bag-of-words model. Once text is vectorized and made comparable, techniques such as text classification can also be leveraged. Text classification assigns a set of tags to text from a predefined set. Text classification applies tags to the text, which are initially human-generated. Thus, the matching of users to tasks based on shared tags can be explained (even if the selection of tags for each text is hard to explain)).
A person of ordinary skill would have been motivated to combine these references as described in regard to claim 1.
As per claims 3 and 12, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claims 1 and 2, and 9 and 11 respectively.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, which is taught by the following citations from Mullins:
wherein the information about the one of the tasks the one or more users are associated with or skillsets the one or more users have is first segmented before the user embeddings are generated (Paragraph Number [0050] teaches one or more aspects of the disclosure recognize that tasks with high similarity in the vector space will be related to very similar tasks, so the users assigned to the first task can likely be assigned to the second task as well. With enough training examples of tasks to which a user is assigned, the exemplary user context vectorization module 230 can create clusters of user knowledge (or other skills and/or capabilities of the user) in the embedded space. These clusters could then be used directly by computing the embedding of a task and determining which user has the closest cluster in the embedded space and recommending that user (or set of users) complete the task. In addition, the user context vectorization module 230 can determine the context of one or more of the users from one or more clusters of similar users. (Examiner asserts that clustering is a form of segmentation)).
A person of ordinary skill would have been motivated to combine these references as described in regard to claim 1.
As per claims 4 and 13, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claims 1 and 2, and 9 and 11 respectively.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, which is taught by the following citations from Mullins:
wherein the user query is converted to an embedding and used in comparing the task embeddings to the user embeddings (Paragraph Number [0057] teaches while one or more techniques described above leverage word embeddings, other techniques could be used as well to compare similarity between a task and a context of a user, such as knowledge or other capabilities. For example, as noted above, a similarity between a task and a context of a user can also (or alternatively) be performed using term frequency-inverse document frequency vectorization techniques, and/or a bag-of-words model. Once text is vectorized and made comparable, techniques such as text classification can also be leveraged. Text classification assigns a set of tags to text from a predefined set. Text classification applies tags to the text, which are initially human-generated. Thus, the matching of users to tasks based on shared tags can be explained (even if the selection of tags for each text is hard to explain)).
A person of ordinary skill would have been motivated to combine these references as described in regard to claim 1.
As per claims 5 and 14, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claims 1 and 2, and 9 and 11 respectively.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, which is taught by the following citations from Mullins:
wherein at least one of the user embeddings or the task embeddings are stored in a vector database (Paragraph Number [0047] teaches the exemplary task context vectorization module 220 can employ one or more techniques to determine the context of tasks within a given collaborative project and otherwise aid in the determination of the meaning of the text, so that the text representation can be captured and compared. In one exemplary implementation, the exemplary task context vectorization module 220 leverages word embeddings that translate words into vectors. Words with similar meanings will have similar vectors, while unrelated words will have very different vectors. Each word in the task text can be converted to a vector (e.g., after stop words are removed) and the vectors can be averaged to create a vector representation for a given task. To compare similarity between questions, for example, another question will be embedded by the exemplary task context vectorization module 220 in the same (or substantially similar) way and a cosine similarity between resulting vectors will be computed, as discussed further below in conjunction with FIGS. 3A through 3C. Paragraph Number [0099] teaches it should also be understood that the disclosed collaborative project task assignment techniques, as described herein, can be implemented at least in part in the form of one or more software programs stored in memory and executed by a processor of a processing device such as a computer).
A person of ordinary skill would have been motivated to combine these references as described in regard to claim 1.
As per claims 6 and 15, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claims 1 and 9 respectively.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach determining user and task embeddings by receiving data regarding a project and assigning a project task by matching users to tasks, which is taught by the following citations from Mullins:
wherein the one or more recommended actions or the one or more revised recommended actions include recommending to assign a task associated with the project to a new user, the new user being a user with matching skills associated with users related to the project or to project requirements (Paragraph Number [0054] teaches the above-described approach for the automatic task assignment module 210 of FIG. 2 processes a training set of mappings from users to tasks as a learning phase for the machine learning models, for example. In some embodiments, techniques are also provided to address a new user who has never been assigned tasks. Paragraph Numbers [0095]-[0097] teach the particular processing operations and other functionality described in conjunction with the flow diagram of FIG. 5 are presented by way of illustrative example only, and should not be construed as limiting the scope of the disclosure in any way. Alternative embodiments can use other types of processing operations to automatically assign tasks of a collaborative project to users. For example, the ordering of the process steps may be varied in other embodiments, or certain steps may be omitted, or performed concurrently with one another rather than serially. In some aspects, additional actions can be performed. In one or more embodiments, techniques are provided to automatically assign users to tasks of a given collaborative project to improve correctness, turn-around time, and overall user experience for completing tasks and the overall collaborative project. In some embodiments, the disclosed techniques for automatically assigning tasks of a collaborative project to users allow an organization to assign users to tasks more effectively and in a more informed manner).
A person of ordinary skill would have been motivated to combine these references as described in regard to claim 1.
As per claims 7 and 16, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claims 1 and 6, and 9 and 15 respectively.
In addition, Birdi teaches:
wherein the revised output includes capacity information for the new user (Paragraph Number [0057] teaches interface 500 may also include filter criteria 570. For example, the filter criteria of “Area” in filter criteria 570 allows user 120 to select which data should be displayed in the main grid 510. Paragraph Number [0058] teaches an interface for displaying details associated with a Standard Operating Procedure (SOP). Interface 600 shows details regarding the “Perimeter Intrusion” SOP shown in main frame 510. Selecting the “New” 610 button allows a blank screen to open so that user 120 may enter and/or save a new SOP. Moving from interface 500 to interface 600 is consistent with risk management system's 200 ability to link intangible SOPs with tangible parts, systems, facilities of the project that are being built, constructed, designed, or implemented. The SOP level information in interface 600 includes standards 620, regulations 630 and best practices 640 while the activity level information 650 (one SOP may have multiple activities) is related to the rest of the boxes in interface 600 for SFOR elements, systems, facilities/infrastructure, stakeholders. Thus, selecting a particular activity will generate one or more records to populate the boxes for SFOR elements, systems, facilities/infrastructure, stakeholders. Interface 600 allows user 120 to add one or more activities. For each activity, user 120 may enter relevant data related to SFOR elements, systems, facilities and stakeholders. Interface 600 allows for a work flow diagram that captures SOP information as it relates to systems, facilities, operations, stakeholders, etc. related to the project. Information entered into interface 600 may be stored in 210).
As per claim 8, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claim 1.
In addition, Birdi teaches:
wherein the output or the revised output is provided for display in a dashboard for the project. (Paragraph Number [0057] teaches interface 500 may also include filter criteria 570. For example, the filter criteria of “Area” in filter criteria 570 allows user 120 to select which data should be displayed in the main grid 510. Paragraph Number [0058] teaches an interface for displaying details associated with a Standard Operating Procedure (SOP). Interface 600 shows details regarding the “Perimeter Intrusion” SOP shown in main frame 510. Selecting the “New” 610 button allows a blank screen to open so that user 120 may enter and/or save a new SOP. Moving from interface 500 to interface 600 is consistent with risk management system's 200 ability to link intangible SOPs with tangible parts, systems, facilities of the project that are being built, constructed, designed, or implemented. The SOP level information in interface 600 includes standards 620, regulations 630 and best practices 640 while the activity level information 650 (one SOP may have multiple activities) is related to the rest of the boxes in interface 600 for SFOR elements, systems, facilities/infrastructure, stakeholders. Thus, selecting a particular activity will generate one or more records to populate the boxes for SFOR elements, systems, facilities/infrastructure, stakeholders. Interface 600 allows user 120 to add one or more activities. For each activity, user 120 may enter relevant data related to SFOR elements, systems, facilities and stakeholders. Interface 600 allows for a work flow diagram that captures SOP information as it relates to systems, facilities, operations, stakeholders, etc. related to the project. Information entered into interface 600 may be stored in 210.).
As per claim 10, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claim 9.
Birdi teaches identifying risk associated with a project by receiving data regarding a project and analyzing the data but does not explicitly teach inputting the data into an AI tool in the form of a prompt and iterating on the results, which is taught by the following citations from Luus:
wherein the generative AI tool is a large language model. (Paragraph Number [0026] teaches the selected generative model 126b is a generative AI model trained to generate content (e.g., textual, spreadsheet, chart, report, audio, image, video, and the like.) in response to natural language prompts input by a user via the native application 114 or via the web. The selected generative model 126b is implemented using a large language model (LLM) in some implementations. Examples of such models include but are not limited to a Generative Pre-trained Transformer 3 (GPT-3), or GPT-4 model. Other implementations may utilize other models or other generative models to set up a statistical test according to the presentation style/format of the user).
One of ordinary skill in the art would be motivated to combine these references as described in regard to claim 1.
As per claim 18, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claim 17.
In addition, Birdi teaches:
wherein the request is received via a project management application or service (Paragraph Number [0024] teaches a risk management application for generating a risk management report. Customer data that is received from a user associated with a computing device includes one or more scope elements of a program or project. Paragraph Number [0026] teaches the risk management system interacts or communicates with a user or customer who is a stakeholder associated with the project in a unique manner that helps stakeholders understand project plans and designs and also helps communicate, manage and track project changes. (See also Paragraph Number [0052])).
As per claim 19, the combination of Birdi, Luus, Sarkar, Mullins, and Tongaonkar teaches each of the limitations of claim 17.
In addition, Birdi teaches:
wherein the request is received via a planner application or service (Paragraph Number [0053] teaches following use of the gap and risk mapping and analysis, a user may proceed to quality management 330C. Risk analysis tool 220 may also recommend a way to mitigate or address the identified gap(s) and risk(s). The risk management application may also create a quality management plan which includes recommendations for how to implement a mitigation strategy or plan. Quality management tool 270 generates strategies for executing a mitigation strategy, including identifying requisite tasks or activities, the relevant personnel or party to complete the tasks or activities, the timeline or deadlines associated with executing the strategy, and budget requirements for the same).
Response to Arguments
Applicant’s arguments filed 2/17/2026 have been fully considered but they are not persuasive.
Applicant argues that the claims are eligible under 35 USC 101. (See Applicant’s Remarks, 2/17/2026, pgs. 8-11). Examiner respectfully disagrees. As noted in the 35 USC 101 analysis presented above, the claims recite an abstract concept that is encapsulated by decision making analogous to a method of organizing human activity. Examiner notes that each of the limitations that encapsulates the abstract concepts is recited in the above 35 USC 101 rejection. Additionally, the claims do not recite a practical application of the abstract concepts in that there is no specific use or application of the method steps other than to make conclusory determinations and provide direction for either a person or machine to follow at some future time, or to make calculations that are mathematical operations. The claims do not recite any particular use for these determinations and directions that improves upon the underlying computer technology (in this instance, the computer software, processor, and memory). Instead, Examiner asserts that the additional elements in the claim language are used only to implement the abstract concepts utilizing technology. The concepts described in the limitations, when taken both as a whole and individually, are not meaningfully different than those found by the courts to be abstract ideas and are similarly considered to be certain methods of organizing human activity, such as managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), or to make calculations that are mathematical operations. The steps are then encapsulated into a particular technological environment by executing them upon a computer processor and utilizing features such as a computer interface, sending and receiving data over a network, or displaying information via a computerized graphical user interface.
However, sending and receiving of information over a network and execution of algorithms on a computer are utilized only to facilitate the abstract concepts (i.e. selecting data on an interface, publishing/displaying information, etc.). As such, Examiner asserts that the implementation of the abstract concepts recited by the claims utilize computer technology in a way that is considered to be generally linking the use of the judicial exception to a particular technological environment or field of use (See MPEP 2106.05(h)). Accordingly, Examiner does not find that the claims recite a practical application of the abstract concepts recited by the claims.
Applicant argues that the previously cited references do not teach the newly amended portions, including the new limitations recited by the independent claims. (See Applicant’s Remarks, 2/17/2026, pg. 12). Examiner respectfully disagrees. Examiner notes that new citations from the previously cited references and the new Tongaonkar reference have been applied to the newly presented claim limitations, as indicated above in the new 35 USC 103 rejection. Examiner has added and emphasized specific portions of the Birdi, Luus, Sarkar, Mullins, and Tongaonkar references to read on the new independent claim language. As such, Applicant’s arguments directed towards the previous rejection are moot. In response to Applicant’s arguments, Examiner directs Applicant to review the new citations and explanations provided in the new 35 USC 103 rejection presented above.
Conclusion
Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW H DIVELBISS whose telephone number is (571)270-0166. The examiner can normally be reached 7:30 am - 6:00 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O'Connor, can be reached at (571) 272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
/M. H. D./
Examiner, Art Unit 3624
/Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624