Prosecution Insights
Last updated: April 19, 2026
Application No. 18/223,685

MACHINE-ASSISTED PROCESS MODELING AND VALIDATION

Status: Non-Final OA (§103), OA Round 3
Filed: Jul 19, 2023
Examiner: VOGT, JACOB BUI
Art Unit: 2653
Tech Center: 2600 — Communications
Assignee: SAP SE

Grant Probability: 57% (Moderate)
Expected OA Rounds: 3-4
Estimated Time to Grant: 2y 10m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 57% (grants 4 of 7 resolved cases; -4.9% vs TC avg)
Interview Lift: +100.0% (allowance rate with vs. without an interview, among resolved cases with an interview)
Avg Prosecution (typical timeline): 2y 10m
Currently Pending: 33
Total Applications: 40 (career history, across all art units)

Statute-Specific Performance

§101: 35.1% (-4.9% vs TC avg)
§103: 43.8% (+3.8% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 10.6% (-29.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 7 resolved cases.
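The headline numbers above are simple ratios, and can be reconstructed as follows. This is a minimal sketch of how such dashboard metrics are typically derived; the Tech Center average and the with/without-interview split below are hypothetical values implied by the displayed deltas, not data from the examiner's actual record:

```python
# Toy recomputation of the dashboard's examiner metrics.
granted, resolved = 4, 7
allow_rate = granted / resolved              # ~0.571, shown as the 57% career allow rate

tc_avg = 0.62                                # hypothetical TC average implied by the -4.9% delta
delta = allow_rate - tc_avg                  # ~-0.049, shown as "-4.9% vs TC avg"

# Interview lift: allowance rate with an interview vs. without, among resolved cases.
rate_with, rate_without = 1.0, 0.5           # hypothetical split that would yield +100%
lift = (rate_with - rate_without) / rate_without  # 1.0, shown as "+100.0% interview lift"
```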

Office Action

§103
DETAILED ACTION

This communication is in response to the Application filed on 01/21/2026. Claims 1-20 are pending and have been examined.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/21/2026 has been entered.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

The reply filed on 01/21/2026 has been entered. Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of new ground(s) of rejection caused by the amendments.

With respect to the applicant’s arguments to claim rejections under 35 U.S.C. § 101, Applicant has amended each of the independent claims and asserts that “the instant claims as amended are patent-eligible, and that the rejection under 35 U.S.C. § 101 be reconsidered and withdrawn.” The examiner agrees that these newly added limitations overcome the rejection under 35 U.S.C. 101.

With respect to the applicant’s arguments to claim rejections under 35 U.S.C. § 103, the applicant’s arguments with respect to claims 1, 6, 8, 13, and 15 have been considered but are moot in view of new ground(s) of rejection caused by the amendments.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 8, 13, and 15 are rejected under 35 U.S.C. 103 as obvious over US Patent Publication 20210342723 A1 (Rao) in view of “Automatic Standard Compliance Assessment of BPMN 2.0 Process Models” (Geiger et al.) in view of US Patent Publication 20160335549 A1 (Ahuja-Cogny et al.).

Claim 1

Regarding claim 1, Rao discloses a computer-implemented method for transforming a process document into model notation that is executable in a target computing system, comprising: receiving, by at least one processor, the process document (Rao ¶ [0068], "unstructured data 209 may be input into ML services 210.") describing one or more executable instructions for the target computing system (Rao ¶ [0060], "The discovery platform 110 may use the event and system logs, audit trails, input and output data from the execution of the intelligent automation applications 135 to generate new updated digital representations of the business process."
¶ [0093], "At step 805, the method includes determining automation candidates from the digital representation."), [wherein the one or more executable instructions are described in a user locale]; generating the model notation in accordance with a model notation format (Rao ¶ [0031], "Discovery module 111 may use process mining techniques applied to application and user footprints, digital exhaust and event logs to discover and create a digital representation of the business process. Visualization module 112 may use the data generated by the discovery module 111 to create a visualization of the current business process as it happens in an enterprise.") by transforming the process document into the one or more executable instructions (Rao ¶ [0089]-[0093], "At step 801, the method includes receiving historical process data associated with one or more computer systems executing the process. … At step 803, the method includes generating an end-to-end process model for the process. … At step 804, the method includes generating a digital representation of the process based on the process model. … At step 805, the method includes determining automation candidates from the digital representation." Generating a process model that can be digitally automated is considered analogous to transforming a process document into executable instructions) with a deep learning technique (Rao ¶ [0035], "process mining algorithms may include petri nets, process trees, casual nets, state machines, business process model and notation (BPMN) models, declarative models, deep belief networks, Baysian belief networks or other machine learning algorithms."), the generating comprising: parsing the process document into tokenized structured data (Rao ¶ [0089], "the method includes receiving historical process data associated with one or more computer systems executing the process. The historical process data may be unstructured documents or images."
¶ [0068], "NLP 214 may detect and understand the natural language portions of the unstructured data. NLP 214 may use computer vision and OCR to parse the document before being analyzed to identify entities. ... Structured database 215 may then store the extracted information from the processing of the unstructured data in a structured manner." ¶ [0039], "NLP module 121 may further perform syntax operations such as grammar induction, lemmatization, morphological segmentation, part-of-speech tagging, parsing, sentence boundary disambiguation, stemming, word segmentation and terminology extraction." Historical process data after NLP processing is considered analogous to tokenized structured data); determining, based on the tokenized structured data and using natural language processing capabilities from the deep learning technique, the model notation format to model the process document (Rao ¶ [0090], "At step 802, the method includes applying process mining techniques to the historical process data. The process mining techniques may include one or more machine learning algorithms designed to identify business processes and their workflows. These machine learning algorithms may include petri nets, process trees, casual nets, state machines, BPMN models, declarative models, deep belief networks, Baysian belief networks or other machine learning models." Process mining techniques are considered analogous to NLP capabilities from deep learning techniques since they are used to identify business processes and their workflows from tokenized structured data); detecting [that] deployment information within the process document (Rao ¶ [0037], "The AI & ML core technology platform 120 may use machine learning and AI algorithms ... to ... 
validate data related to the process being automated") [includes the one or more executable instructions by performing a syntactic check on the tokenized structured data using a validator corresponding to the determined model notation format, wherein the syntactic check indicates whether the deployment information is free of computational errors by determining, using the validator, whether the tokenized structured data specifies each of the one or more executable instructions of the process document]; and deploying, using the deployment information, the model notation in the target computing system associated with the determined model notation format by storing the one or more executable instructions to a memory of the target computing system (Rao ¶ [0068], "Multiple machine learning algorithms may be used individually or in combination to perform the extractions in the entity extractor 211, raw data extractor 212, table processor 213 and NLP 214. Structured database 215 may then store the extracted information from the processing of the unstructured data in a structured manner."), wherein the one or more executable instructions are executed by the target computing system upon the deploying (Rao ¶ [0093]-[0094], "At step 805, the method includes determining automation candidates from the digital representation. ... At step 806, the method includes identifying automation tools for the automation candidates. Each activity, task or event in the business process may be automatable in different ways. The system may determine which machine learning tool would be best to train and execute in the automation of the business process." Automating a business process is considered analogous to deploying a model notation. Activities, tasks, or events comprised in the business process are considered analogous to deployment information).

Rao does not explicitly disclose all of validating deployment information via a syntactic check. However, Geiger et al.
disclose detecting that deployment information within the process document includes the one or more executable instructions by performing a syntactic check on the tokenized structured data using a validator corresponding to the determined model notation format (Geiger et al. pg. 7, Section 3, Paragraph 2, "The basic validation workflow is visualized in Fig. 2: After parsing the command line options, the BPMN file, and potentially needed further artifacts (as part of the input package, see Fig. 1) are transformed into an internal BPMNProcess representation. During this step the input files are also checked for schema validity." Checking input files for schema validity is considered analogous to detecting whether process documents include executable instructions), wherein the syntactic check indicates whether the deployment information is free of computational errors by determining, using the validator, whether the tokenized structured data specifies each of the one or more executable instructions of the process document (Geiger et al. pg. 7, Section 3, Paragraph 2, "input files are also checked for schema validity. Next, the referential integrity is checked, i.e., do all referenced elements exist and do they have an allowed type. In the final step, the so called EXT constraints are evaluated. ... [EXT constraints] are defined in a declarative way using Schematron [11], which is an ISO standard dedicated to define and check constraints for XML-based documents. If a violation of a constraint is found, it is added to the ValidationResult which is the output of the tool (see Fig. 1) and basis for the reports created after the execution of all validation steps." Schema validation, referential integrity validation, or EXT constraint validation, whether taken individually or in combination, is considered analogous to determining whether tokenized structured data specifies each of the executable instructions). 
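Geiger et al.'s cited validation workflow proceeds in stages: schema validity, then referential integrity (do all referenced elements exist, with an allowed type?), then declarative EXT constraints. The referential-integrity stage can be illustrated with a minimal stdlib-only sketch. This is an invented illustration of the general idea, not Geiger et al.'s actual tool, and the sample BPMN model below is made up:

```python
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"

def check_referential_integrity(xml_text):
    """Return violations: sequenceFlow attributes that reference undefined ids."""
    root = ET.fromstring(xml_text)
    known_ids = {el.get("id") for el in root.iter() if el.get("id")}
    violations = []
    for flow in root.iter(f"{{{BPMN_NS}}}sequenceFlow"):
        for attr in ("sourceRef", "targetRef"):
            ref = flow.get(attr)
            if ref not in known_ids:
                violations.append((flow.get("id"), attr, ref))
    return violations

model = """<definitions xmlns="http://www.omg.org/spec/BPMN/20100524/MODEL">
  <process id="p1">
    <startEvent id="start"/>
    <task id="t1"/>
    <sequenceFlow id="f1" sourceRef="start" targetRef="t1"/>
    <sequenceFlow id="f2" sourceRef="t1" targetRef="end"/>
  </process>
</definitions>"""

# Flow f2 targets "end", which is never defined, so the check flags it.
print(check_referential_integrity(model))
```

A violation found at this stage would, in Geiger et al.'s terms, be added to the ValidationResult that the tool outputs.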
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rao’s model notation deployment system to incorporate Geiger et al.’s syntactic checks. The suggestion/motivation for doing so would have been that, “there is an increasing need for valid, standard compliant process models and model serializations, as these are a prerequisite for sensible results, especially for process simulation and execution,” as noted by Geiger et al. in pg. 1, Section 1, Paragraph 1. Rao in view of Geiger et al. does not explicitly disclose processing documents in a user locale. However, Ahuja-Cogny et al. disclose receiving, by at least one processor, the process document (Ahuja-Cogny et al. ¶ [0048], "Starting at step 205, one or more documents are received which contain knowledge information. These documents may be presented in various formats generally known in the art including, and without limitation, Microsoft Office™ (e.g., .doc) formats, html, rich text formats, etc.") describing one or more executable instructions for the target computing system (Ahuja-Cogny et al. ¶ [0049], "In general, the document may be structured to follow domain-specific thought processes and their logic based step-by-step approach."), wherein the one or more executable instructions are described in a user locale (Ahuja-Cogny et al. ¶ [0062], "The technology described herein provides techniques for moving from a document or other source of knowledge to an application without any technology requirement from any of the users (knowledge user or end user). 
This may be done in a seamless transition from the business or domain-specific environment and its natural medium (e.g., English) to the technology environment and back to the business or domain environment and its natural medium (e.g., English)."); generating the model notation in accordance with a model notation format (Ahuja-Cogny et al. ¶ [0052], "If the BPMN option is selected, the KPML generated at step 215 is translated at step 220 into BPMN standard business sub-processes." Generated BPMN is considered analogous to claimed model notation. See Fig. 3D-3G for examples of generated BPMN.) by transforming the process document into the one or more executable instructions (Ahuja-Cogny et al. ¶ [0049], "at step 210, the knowledge information contained in the one or more documents is segregated and organized in information elements using automated tagging and extraction techniques coded for the purpose and based on a type of AI for knowledge processes.") with a [deep] learning technique (Ahuja-Cogny et al. ¶ [0050], "machine learning rules can be learned or updated to aid in the processing for additional documents and natural language processing can be used for refining the result."), the generating comprising: generating the model notation by processing the tokenized structured data with the [deep] learning technique (Ahuja-Cogny et al. ¶ [0050], "machine learning rules can be learned or updated to aid in the processing for additional documents and natural language processing can be used for refining the result.") in accordance with the determined model notation and the user locale (Ahuja-Cogny et al. ¶ [0062], "The technology described herein provides techniques for moving from a document or other source of knowledge to an application without any technology requirement from any of the users (knowledge user or end user).
This may be done in a seamless transition from the business or domain-specific environment and its natural medium (e.g., English) to the technology environment and back to the business or domain environment and its natural medium (e.g., English)."). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rao’s model notation generation system to include Ahuja-Cogny et al.’s user locale support because such a modification is the result of combining prior art elements according to known methods to yield predictable results. More specifically, Rao’s NLP techniques as modified by Ahuja-Cogny et al.’s support for user locale can yield a predictable result of improving usability since supporting multiple locales to account for different users would make the invention more user-friendly to those who do not speak a certain locale. Thus, a person of ordinary skill would have appreciated including in Rao’s model notation generation system the ability to do Ahuja-Cogny et al.’s user locale support since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

Claim 6

Regarding claim 6, the rejection of claim 1 is incorporated. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. disclose all the elements of the claimed invention as stated above. Geiger et al. further disclose detecting that the model notation contains an error with respect to the model notation format (Geiger et al. pg. 7, Section 3, Paragraph 2, "After parsing the command line options, the BPMN file, and potentially needed further artifacts ... are transformed into an internal BPMNProcess representation.
During this step the input files are also checked for schema validity." Checking for schema validity is considered analogous to detecting an error with respect to model notation format); and in response to the detecting, outputting a notification indicating that the model notation contains the error (Geiger et al. pg. 7, Section 3, Paragraph 2, "If a violation of a constraint is found, it is added to the ValidationResult which is the output of the tool (see Fig. 1) and basis for the reports created after the execution of all validation steps.").

Claim 8

Regarding claim 8, Rao discloses a system for generating a model notation, comprising: a memory; and at least one processor coupled to the memory (Rao ¶ [0026], "A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein."). The remaining limitations of claim 8 are similar in scope to the limitations of claim 1 and therefore are rejected for similar reasons as described above.

Claim 13

Regarding claim 13, the rejection of claim 8 is incorporated. The limitations of claim 13 are similar in scope to that of claim 6 and therefore are rejected for similar reasons as described above.

Claim 15

Regarding claim 15, Rao discloses a non-transitory computer-readable medium having instructions stored thereon that, when executed by at least one computing device, cause the at least one computing device to perform operations (Rao ¶ [0026], "A computer system may include a processor, a memory, and a non-transitory computer-readable medium. The memory and non-transitory medium may store instructions for performing methods and steps described herein."). The remaining limitations of claim 15 are similar in scope to the limitations of claim 1 and therefore are rejected for similar reasons as described above.

Claims 2-4, 9-11, and 16-18 are rejected under 35 U.S.C.
103 as obvious over Rao in view of Geiger et al. in view of Ahuja-Cogny et al. as applied to claims 1, 8, and 15 above, and further in view of "Generating Natural Language Texts from Business Process Models" (Leopold et al.) in view of "Large Language Models for Business Process Management: Opportunities and Challenges" (Vidgof et al.).

Claim 2

Regarding claim 2, the rejection of claim 1 is incorporated. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. disclose all the elements of the claimed invention as stated above. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. do not explicitly disclose all of converting a model notation into a natural language document. However, Leopold et al. disclose converting the model notation to a generated document in the user locale (Leopold et al. pg. 103, Introduction, Paragraph 1, "To this end, we define an approach which automatically transforms BPMN process models into understandable natural language texts."; pg. 135, Section 5.4, Paragraph 1, "Although the general concept of the text generation technique is not language-specific, the adaptation to languages other than English requires the replacement of three resources: the parsing and annotation component, the predefined DSynT templates, and the surface realizer.") by generating a predetermined string based on a parameterized template (Leopold et al. pg. 118, Section 5.2.5.4, Paragraph 1-3, "We previously defined a bond as a set of RPST fragments sharing two common nodes. In addition, we pointed out that this applies to block-structured splits and joins. ... The essential idea for transforming bonds to text is to complement the sentences that are generated for activities with additional explanations. Such an explanation sentence may either stand separately or may be incorporated into an activity sentence. Table 5.2 provides an overview of the basic sentence templates for each bond type."
See Table 5.2, which maps sentence templates to certain structures in process model notation), wherein the predetermined string corresponds to an object included in the model notation (Leopold et al. pg. 118, Section 5.2.5.4, Paragraph 1-3; see Table 5.2, which maps sentence templates to certain structures in process model notation. These structures are considered analogous to the claimed object included in model notation.). It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rao in view of Geiger et al. in view of Ahuja-Cogny et al. to incorporate converting model notation into natural language as taught by Leopold et al. The suggestion/motivation for doing so would have been that, “Process models are widely used for documenting and redesigning the operations of companies. The audience of these models ranges from well-trained system analysts and developers to casual staff members who are unexperienced in terms of modeling. Hence, most of the latter lack the ability to understand process model in detail,” as noted by Leopold et al. in the abstract. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. in view of Leopold et al. do not explicitly disclose all of the deep learning technique being an LLM. However, Vidgof et al. disclose wherein the deep learning technique is a large language model (LLM) (Vidgof et al. pg. 6, Section 3.1, Paragraph 2, "The idea is to give LLM all relevant documentation existing in the organization as input. This can include legal documents, job descriptions, advertisements, internal knowledge bases and handbooks. The LLM is then tasked to identify which processes are taking place in the organization. It can be further instructed to classify the input documents according to processes they describe."). 
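The parameterized-template idea attributed to Leopold et al. above (Table 5.2 maps model structures to sentence templates whose slots are filled from the model) can be shown in miniature. The template strings and names below are invented for illustration only; they are not Leopold et al.'s actual DSynT templates:

```python
# Invented illustration: each key stands for a structure in the model notation,
# and the predetermined string is produced by filling the template's parameters.
TEMPLATES = {
    "activity": "The {role} {action}.",
    "xor_split": "If {condition}, the process continues with one of {n} branches.",
    "and_split": "The process is split into {n} parallel branches.",
}

def render(kind, **params):
    """Generate the predetermined string for a given model object."""
    return TEMPLATES[kind].format(**params)

print(render("activity", role="clerk", action="checks the invoice"))
print(render("and_split", n=2))
```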
It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rao’s model notation generation system to incorporate Vidgof et al.’s LLMs. The suggestion/motivation for doing so would have been that, “The BPM lifecycle starts from Identification. Normally, at this stage there is not much structured process knowledge available in the company, and relevant information has to be extracted from heterogeneous internal documentation. This is exactly where LLM shine as they can quickly scan and summarize large volumes of text, highlighting important documents or directly outputting required information,” as noted by Vidgof et al. in pg. 6, Section 3.1, Paragraph 1.

Claim 3

Regarding claim 3, the rejection of claim 2 is incorporated. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. in view of Leopold et al. in view of Vidgof et al. disclose all the elements of the claimed invention as stated above. Leopold et al. further disclose comparing an envisioned process document describing a process in the user locale with the generated document (Leopold et al. pg. 126, Section 5.3, Paragraph 1, "To demonstrate the capability of the presented technique for generating natural language texts from process models, we conduct an evaluation with real-world data. The overall goal of the evaluation is to learn how the generated texts compare to textual descriptions that were created by humans." Textual descriptions created by humans are considered analogous to an envisioned process document) and outputting a difference between the envisioned process document and the generated document (Leopold et al. pg. 131, Section 5.3.4, Paragraph 3; "Table 5.5 summarizes the overall evaluation results for the structural dimension. ...
the complexity metrics clauses per sentence ratio (C/S), t-units per sentence ratio (T/S), and complex t-units per sentence ratio (CT/S) indicate that the generated sentences are less complex with regard to the syntactic dimension. Particularly the number of clauses per complex t-units is, on average, much smaller for the generated sentences.").

Claim 4

Regarding claim 4, the rejection of claim 3 is incorporated. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. in view of Leopold et al. in view of Vidgof et al. disclose all the elements of the claimed invention as stated above. Leopold et al. further disclose extracting contexts from the envisioned process document and the generated document (Leopold et al. pg. 127, Section 5.3.1, Paragraph 3, "The text content dimension refers to the extent the text reflects the semantics of the model. In general, we identified that the sentences of the generated as well as the manually created texts can be subdivided into three types: sentences describing the model content (i.e., the nodes of the model), sentences solely discussing the control flow, and sentences providing additional context information that is not captured by the model. Hence, we operationalize the text content dimension using the following metrics ... ."); and comparing the contexts to determine the difference (Leopold et al. pg. 126, Section 5.3.1, Paragraph 1, "In the context of the evaluation, we aim at learning how the generated texts compare to texts that were created by humans. In particular, we consider two dimensions: text structure and text content.").

Claim 9

Regarding claim 9, the rejection of claim 8 is incorporated. The limitations of claim 9 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above.

Claim 10

Regarding claim 10, the rejection of claim 9 is incorporated.
The limitations of claim 10 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.

Claim 11

Regarding claim 11, the rejection of claim 10 is incorporated. The limitations of claim 11 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above.

Claim 16

Regarding claim 16, the rejection of claim 15 is incorporated. The limitations of claim 16 are similar in scope to that of claim 2 and therefore are rejected for similar reasons as described above.

Claim 17

Regarding claim 17, the rejection of claim 16 is incorporated. The limitations of claim 17 are similar in scope to that of claim 3 and therefore are rejected for similar reasons as described above.

Claim 18

Regarding claim 18, the rejection of claim 17 is incorporated. The limitations of claim 18 are similar in scope to that of claim 4 and therefore are rejected for similar reasons as described above.

Claims 5, 12, and 19 are rejected under 35 U.S.C. 103 as obvious over Rao in view of Geiger et al. in view of Ahuja-Cogny et al. as applied to claims 1, 8, and 15 above, and further in view of US Patent Publication 20210089620 A1 (Finkelshtein et al.).

Claim 5

Regarding claim 5, the rejection of claim 1 is incorporated. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. disclose all the elements of the claimed invention as stated above. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. do not explicitly disclose all of the detection of personal information. However, Finkelshtein et al. disclose detecting, using the deep learning technique, that the process document includes a usage of personal information (Finkelshtein et al.
¶ [0026], "First, a named-entity recognition (NER) algorithm may be applied to a digital text document which is suspected to be inclusive of personal information, to detect named entities appearing in the document."; ¶ [0051]-[0052], "The NER algorithm may be one which uses linguistic grammar-based techniques and/or statistical models (e.g., machine learning). Prominent NER algorithms which may be used for step 206 are those included, for example, in the following software packages: Natural Language Understanding by IBM Corporation; spaCy by ExplosionAI GmbH, Germany; and Natural Language Toolkit (NLTK)" Natural Language Understanding by IBM Corporation uses deep learning, and is therefore considered analogous to the claimed deep learning technique); and in response to detecting usage of personal information, outputting a notification indicating that the process document includes the usage of the personal information (Finkelshtein et al. ¶ [0073], "In a step 212, a notification of a result of the estimation may be issued. This may include, for example, displaying the per-person privacy scores and/or the per-document privacy score on a computer display, transmitting the scores in an electronic message, adding the scores as attributes to document 202 (namely, to the digital file storing the document), recording the scores in a database that stores document and privacy score data, etc.").

It would have been obvious to a person having ordinary skill in the art before the time of the effective filing date of the claimed invention of the instant application to modify Rao in view of Geiger et al. in view of Ahuja-Cogny et al. to incorporate detection and notification of personal information as taught by Finkelshtein et al. The suggestion/motivation for doing so would have been, “the growth in security attacks on sensitive data,” as noted by the Finkelshtein et al. disclosure in paragraph [0002].

Claim 12

Regarding claim 12, the rejection of claim 8 is incorporated.
The limitations of claim 12 are similar in scope to the limitations of claim 5 and therefore are rejected for similar reasons as described above.

Claim 19

Regarding claim 19, the rejection of claim 15 is incorporated. The limitations of claim 19 are similar in scope to the limitations of claim 5 and therefore are rejected for similar reasons as described above.

Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as obvious over Rao in view of Geiger et al. in view of Ahuja-Cogny et al. as applied to claims 1, 8, and 15 above, and further in view of the publication "Large Language Models for Business Process Management: Opportunities and Challenges" (Vidgof et al.).

Claim 7

Regarding claim 7, the rejection of claim 1 is incorporated. Rao in view of Geiger et al. in view of Ahuja-Cogny et al. disclose all the elements of the claimed invention as stated above. Rao further discloses generating the model notation in accordance with the [determined] model notation format by processing the process document with the deep learning technique (Rao ¶ [0090], "At step 802, the method includes applying process mining techniques to the historical process data. The process mining techniques may include one or more machine learning algorithms designed to identify business processes and their workflows. These machine learning algorithms may include petri nets, process trees, casual nets, state machines, BPMN models, declarative models, deep belief networks, Baysian belief networks or other machine learning models.") [based on a prompt for modeling the process document]. Ahuja-Cogny et al. further disclose a prompt for modeling the process document (Ahuja-Cogny et al. ¶ [0052], "at step 217, a selection is made between generating BPMN (or other business process modeling languages) or KPEL based on KPML. This selection may be performed, for example, by the user at runtime or using a pre-configured setting."). Rao in view of Geiger et al. in view of Ahuja-Cogny et al.
do not explicitly disclose determining a specific model notation format from a process document. However, Vidgof et al. disclose determining, using the deep learning technique, the model notation format from at least one of business process modeling notation (BPMN), case management model and notation (CMMN), and decision model and notation (DMN) to model the process document based on context of the process document (Vidgof et al. pg. 2, Section 1, Paragraph 4, "Section 2 discusses the essential concepts of Deep Learning (DL) and LLM in relation to Business Process Management (BPM) practises. In Section 3 we identify and discuss LLM applications within BPM and along the different BPM lifecycle phases."; pg. 6, Section 3.1, Paragraph 2, "The idea is to give LLM all relevant documentation existing in the organization as input. This can include legal documents, job descriptions, advertisements, internal knowledge bases and handbooks. The LLM is then tasked to identify which processes are taking place in the organization. It can be further instructed to classify the input documents according to processes they describe."). It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention of the instant application to modify Rao's model notation generation system to incorporate the determination of model notation as taught by Vidgof et al. The suggestion/motivation for doing so is similar to the suggestion/motivation described above with respect to claim 2.

Claim 14

Regarding claim 14, the rejection of claim 8 is incorporated. The limitations of claim 14 are similar in scope to the limitations of claim 7 and therefore are rejected for similar reasons as described above.

Claim 20

Regarding claim 20, the rejection of claim 15 is incorporated. The limitations of claim 20 are similar in scope to the limitations of claim 7 and therefore are rejected for similar reasons as described above.
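The claim-7 limitation at issue — determining a model notation format (BPMN, CMMN, or DMN) from the context of a process document — is what Vidgof et al. propose doing with an LLM. As a self-contained illustration of that selection step, the sketch below substitutes a simple keyword heuristic for the LLM; the cue lists and function name are hypothetical, not drawn from any cited reference.

```python
# Toy stand-in for LLM-based notation-format selection (illustrative only).
# Cue words per format are hypothetical examples, not from the references.
NOTATION_CUES = {
    "BPMN": ["workflow", "task", "sequence flow", "process step"],
    "CMMN": ["case", "discretionary", "milestone"],
    "DMN": ["decision table", "business rule", "decision"],
}

def determine_notation_format(document: str) -> str:
    """Score each notation by cue-word hits; prefer BPMN on a tie."""
    text = document.lower()
    scores = {fmt: sum(text.count(cue) for cue in cues)
              for fmt, cues in NOTATION_CUES.items()}
    # Tie-break toward BPMN, the most common process-modeling notation.
    return max(scores, key=lambda fmt: (scores[fmt], fmt == "BPMN"))

print(determine_notation_format(
    "Each task follows a sequence flow in the approval workflow."))
```

A production system following Vidgof et al. would replace the scoring dictionary with a prompt to an LLM that classifies the document; the surrounding control flow would be the same.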
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JACOB B VOGT whose telephone number is (571)272-7028. The examiner can normally be reached Monday - Friday 9:30am - 7pm EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Paras D Shah, can be reached at (571)270-1650. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACOB B VOGT/
Examiner, Art Unit 2653

/Paras D Shah/
Supervisory Patent Examiner, Art Unit 2653

03/09/2026

Prosecution Timeline

Jul 19, 2023 — Application Filed
Jun 17, 2025 — Non-Final Rejection (§103)
Aug 21, 2025 — Interview Requested
Aug 26, 2025 — Applicant Interview (Telephonic)
Aug 26, 2025 — Examiner Interview Summary
Sep 08, 2025 — Response Filed
Oct 27, 2025 — Final Rejection (§103)
Jan 13, 2026 — Examiner Interview Summary
Jan 13, 2026 — Applicant Interview (Telephonic)
Jan 21, 2026 — Request for Continued Examination
Jan 28, 2026 — Response after Non-Final Action
Mar 09, 2026 — Non-Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12505279
METHOD AND SYSTEM FOR DOMAIN ADAPTATION OF SOCIAL MEDIA TEXT USING LEXICAL DATA TRANSFORMATIONS
2y 5m to grant; granted Dec 23, 2025
Study what changed to get past this examiner, based on the 1 most recent grant.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 57%
With Interview: 99% (+100.0% lift)
Median Time to Grant: 2y 10m
PTA Risk: High
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
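The arithmetic behind these projections can be reproduced from the figures shown on the page. The snippet below is an illustrative reconstruction, not the tool's actual formula: the 57% grant probability is the career allow rate (4 granted of 7 resolved), and the with-interview figure applies the stated +100% lift, capped at 99% as displayed.

```python
# Illustrative reconstruction of the page's examiner statistics.
# Methodology is assumed; only the input figures come from the page.
granted, resolved = 4, 7
allow_rate = granted / resolved          # career allow rate, ~0.571
interview_lift = 1.0                     # "+100.0% interview lift"
# Assumed cap at 0.99 to match the displayed "99% With Interview".
with_interview = min(allow_rate * (1 + interview_lift), 0.99)

print(f"Grant probability: {allow_rate:.0%}")
print(f"With interview:   {with_interview:.0%}")
```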
