Prosecution Insights
Last updated: April 19, 2026
Application No. 18/653,981

SYSTEMS AND METHODS TO GENERATE TRAINING MODULES FOR A FACILITY

Status: Final Rejection (§103)
Filed: May 03, 2024
Examiner: WARNER, PHILIP N
Art Unit: 3624
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Honeywell International Inc.
OA Round: 2 (Final)

Grant Probability: 36% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 3y 7m
Grant Probability with Interview: 65%

Examiner Intelligence

Career Allow Rate: 36% (39 granted / 107 resolved; -15.6% vs TC average)
Interview Lift: +28.6% (allow rate across resolved cases with vs. without an interview)
Typical Timeline: 3y 7m average prosecution; 28 applications currently pending
Career History: 135 total applications across all art units
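The headline figures above are simple ratios over the examiner's resolved cases. A quick sketch of the arithmetic (assuming, per the cards, that 65% is the with-interview grant probability and the career rate is the baseline):

```python
# Career allow rate: grants over resolved cases, from the
# "39 granted / 107 resolved" card.
granted, resolved = 39, 107
career_rate = granted / resolved          # ~0.364, the 36% shown

# Interview lift, read as the with-interview probability (65%)
# minus the baseline career allow rate.
with_interview = 0.65
lift = with_interview - career_rate       # ~0.286, the +28.6% shown

print(f"allow rate {career_rate:.1%}, interview lift +{lift:.1%}")
```
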

Statute-Specific Performance

§101: 31.8% (-8.2% vs TC average)
§103: 53.8% (+13.8% vs TC average)
§102: 9.5% (-30.5% vs TC average)
§112: 4.9% (-35.1% vs TC average)

Deltas are measured against the Tech Center average estimate • Based on career data from 107 resolved cases
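Each per-statute delta is measured against a single Tech Center baseline, so back-computing that baseline from each pair of figures is a useful consistency check (a small sketch using only the numbers shown above):

```python
# Examiner allowance rate per statute and its stated delta vs. the TC average.
examiner = {"101": 31.8, "103": 53.8, "102": 9.5, "112": 4.9}
delta = {"101": -8.2, "103": 13.8, "102": -30.5, "112": -35.1}

# Implied Tech Center average: rate minus delta, per statute.
implied_tc_avg = {k: round(examiner[k] - delta[k], 1) for k in examiner}
```

Every statute back-computes to the same 40.0% baseline, consistent with a single reference-line estimate for the Tech Center.
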

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following FINAL Office Action is in response to Applicant’s communication filed 11/21/2025 regarding Application 18/653,981.

Status of Claim(s)

Claim(s) 1-20 is/are currently pending and are rejected as follows.

Response to Arguments – 101 Rejection

Applicant’s arguments and amendments in regards to the previously applied 101 rejection have been fully considered and deemed persuasive. Examiner therefore withdraws the previously applied 101 rejection.

Response to Arguments – 103 Rejection

Applicant’s arguments in regards to the previously applied 103 rejection are rendered moot in view of the amended prior art rejection below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Deolalikar (US 2014/0164297 A1) in view of Bass (US 2003/0009742 A1) and Marom (US 2025/0209256 A1).

Claim(s) 1, 9, and 17 – Deolalikar discloses the following:

A non-transitory, computer-readable storage medium having stored thereon executable instructions (Deolalikar: Paragraph 44, "The data storage device (104) may include various types of memory modules, including volatile and nonvolatile memory. For example, the data storage device (104) of the present example includes Random Access Memory (RAM) (131), Read Only Memory (ROM) (132), and Hard Disk Drive (HDD) memory (133). Many other types of memory may also be utilized, and the present specification contemplates the use of many varying type(s) of memory in the data storage device (104) as may suit a particular application of the principles described herein. In certain examples, different types of memory in the data storage device (104) may be used for different data storage needs. For example, in certain examples the processor (102) may boot from Read Only Memory (ROM) (132), maintain nonvolatile storage in the Hard Disk Drive (HDD) memory (133), and execute program code stored in Random Access Memory (RAM) (131).")

A processor (Deolalikar: Paragraph 46, "The hardware adapters (106) in the machine learning classifying device (100) enable the processor (102) to interface with various other hardware elements, external and internal to the machine learning classifying device (100).
For example, peripheral device adapters (106) may provide an interface to input/output devices, such as, for example, display device (110) or access other external devices (112). The display device (110) may be provided to allow a user to interact with and implement the functionality of the machine learning classifying device (100).")

a memory communicatively coupled to the processor (Deolalikar: Paragraph 44, "The data storage device (104) may include various types of memory modules, including volatile and nonvolatile memory. For example, the data storage device (104) of the present example includes Random Access Memory (RAM) (131), Read Only Memory (ROM) (132), and Hard Disk Drive (HDD) memory (133). Many other types of memory may also be utilized, and the present specification contemplates the use of many varying type(s) of memory in the data storage device (104) as may suit a particular application of the principles described herein. In certain examples, different types of memory in the data storage device (104) may be used for different data storage needs. For example, in certain examples the processor (102) may boot from Read Only Memory (ROM) (132), maintain nonvolatile storage in the Hard Disk Drive (HDD) memory (133), and execute program code stored in Random Access Memory (RAM) (131).")

creating by one or more processors one or more documents related to one or more operations in a facility; (Deolalikar: Paragraph 32, "As used in the present specification and in the appended claims, the terms "classifier," "machine learning classifier," "machine learning classifying device," or similar language is meant to be understood broadly as any hardware device or a combination of hardware devices and software that classifies a number of textual documents by topic or category.
In the present specification, the machine learning classifying device further utilizes pseudo-documents created from a number of original documents to learn how to categorize a number of test documents, as will be described in more detail below."; Paragraph 33, "Still further, as used in the present specification and in the appended claims, the term "distribution of words" or similar language is meant to be understood broadly as the frequency of occurrence of individual words, phrases, or a combination thereof that appear within a document. Thus, as mentioned above, the distribution of words in an original document may comprise the number of times each individual word appears within the original document. This distribution of words is used to create a number of pseudo-documents."; Paragraph 42, "The processor (102) may include the hardware architecture to retrieve executable code from the data storage device (104) and execute the executable code. The executable code may, when executed by the processor (102), cause the processor (102) to implement at least the functionality of receiving a number of original documents, deriving a number of pseudo-documents from the original documents, utilizing the derived pseudo-documents to learn how to classify a test document into a category, and classify a number of test documents based on the learning, according to the methods of the present specification described herein. In the course of executing code, the processor (202) may receive input from and provide output to a number of the remaining hardware units.")

receiving, via a user interface, a first input from a user in the facility, wherein the first input is associated with the one or more documents; (Deolalikar: Paragraph 46, "The hardware adapters (106) in the machine learning classifying device (100) enable the processor (102) to interface with various other hardware elements, external and internal to the machine learning classifying device (100).
For example, peripheral device adapters (106) may provide an interface to input/output devices, such as, for example, display device (110) or access other external devices (112). The display device (110) may be provided to allow a user to interact with and implement the functionality of the machine learning classifying device (100)."; Paragraph 49, "The machine learning classifying device (100) may comprise a sampling module (140) to, when executed by the processor (102), receive a number of original documents, determine a distribution of words within the original documents, and store those original documents in an original document database (142). In one example, the sampling module (140) is stored within the data storage device (104) of the machine learning classifying device (100), and is accessible and executable by the processor (102)."; Paragraph 77, "FIG. 3 is a flow chart (300) showing a method of generating training documents for training a classifying device (100), according to another example of the principles described herein. The method of FIG. 3 may begin with the processor (102) of the machine learning classifying device (100) storing (block 205) a number of original documents. The original documents may be stored in the original document database (142) of the data storage device (104). The original documents may be provided to the machine learning classifying device (100) from a user or administrator that is seeking to teach the machine learning classifying device (100). In another example, the original documents may be searched for and obtained by the machine learning classifying device (100) autonomously.")

automatically extracting by one or more processors…one or more topics from the one or more documents based at least on the first input…; (Deolalikar: Paragraph 11, "The present systems and methods provide for the classification of textual documents.
In some situations, a relatively limited number of test documents are available to a computing system to train the computing system to classify the textual documents. Without a sufficient number of training documents, the classification system may not be able to correctly classify documents as being relevant to, for example, a specific topic."; Paragraph 32, "As used in the present specification and in the appended claims, the terms "classifier," "machine learning classifier," "machine learning classifying device," or similar language is meant to be understood broadly as any hardware device or a combination of hardware devices and software that classifies a number of textual documents by topic or category. In the present specification, the machine learning classifying device further utilizes pseudo-documents created from a number of original documents to learn how to categorize a number of test documents, as will be described in more detail below."; Paragraph 36, "Throughout the below description, an example of classifying a number of documents in a news reporting scenario is described in which a number of people such as reporters prepare a number of textual documents. After being produced, these textual documents are classified by a machine learning classifying device in order to obtain a number of cataloged textual documents arranged by topics such as, for example, economy, sports, and politics, among many other topics.")

updating by one or more processors a database with the one or more topics extracted from the one or more documents; (Deolalikar: Paragraph 22, "Of the many classifiers that have been proposed for the task of text classification, a "baseline" may be the naive Bayes probabilistic classifier. The naive Bayes classifier has several advantages that make it attractive for enterprise applications. It is easy to implement, and can be trained fast.
But an aspect of naive Bayes that makes it attractive for enterprise applications is that it is transparent, and can be used for diagnostics. The user can easily understand the classifier, and can therefore troubleshoot it easily. In comparison, a solved model of a SVM is often hard to interpret. This difference is especially important in situations where the data is periodically being updated, the classes are still changing, or the system is still under construction.")

Deolalikar does not explicitly disclose generating a training template specifically for a topic or a training module; however, in analogous art of generating documentation, Bass teaches the following:

generating by the one or more processors…one or more training templates for the one or more topics; (Bass: Paragraph 15, "The automated job training and performance tool is a suite of computer software applications for enabling an organization to develop a program for the instruction and training of members of the organization. The tool enables those charged with developing instruction and training to develop a web-based training course without having any formal acquaintance with computer programming languages, either individually or jointly in synchronous or asynchronous modes. The suite includes a guidelines application describing the procedures for developing a job training program, a design application which uses analysis and design template to guide the user in course development, and a Web Author application for automating the process of generating an HTML document implementing the course.
The three applications may be used individually, but are seamlessly integrated through object-oriented programming techniques so that each application may access the other, and so that data entered in the templates and forms is carried over to the Web Author application."; Paragraph 22, "It is another object of the invention to provide an automated job training and performance tool which utilizes templates and forms for ease of operation in developing an organization's training program."; Paragraph 61, "The idEa tool 38 is broadly divided into the guidelines 40 and templates 42, as shown in FIG. 2. The guidelines is a rich knowledge base based on the Instructional Systems Design (ISD) Model. The browser-based guidelines 40 provide the organization with principles, a tutorial, and guidelines for designing and developing instructionally sound training programs. Structurally the guidelines include content display, navigation means, a glossary, help including the tutorial, a notepad and bookmark tool, all deriving their content from a content database via a data processor. The idEa templates 42 are Java-based and allow users to complete analysis and design tasks and activities online. The templates 42 are either downloaded from the web server 28 or accessed through a browser using the Java Web Start plug-in so that the organization may input information to design their job training program. The templates 42--and their contents--are structured as objects so that course designers/developers and subject matter experts can reuse them. The templates 42 behave like wizards to guide the user in completing the template 42. A wizard is an interactive utility that guides a user through a process step by step. Templates are presented to users for their input of data specific to a task or activity. Pop-up windows appear at certain places to offer suggestions, tips, and the opportunity to seek help.
Each template has a toolbar offering users several functions, e.g., file options, help function, etc. Users may save templates in a file, to their desktop, to their LAN, to disk, to export to HTML, etc. Users can reuse templates. A data processor 52 performs one or more of the following processes, depending on the particular task represented by the template, using a rule-based processing engine: (1) compiles the information; (2) weights the information based on a rule-based process; (3) calculates based on a rule-based process; and (4) filters/sorts the information based on a rule-based process. Once the processing is complete, the processor 52 outputs recommendations as process objects. The objects can be different forms depending on their intent and the type of business. The templates themselves are objects, as well as the fields and the information contained in the fields. Depending on the template and its purpose, the template references needed objects and displays them in a structured format, outputting desired information as well as allowing users to insert or change information, as shown below in FIGS. 14A-14D and FIGS. 15A-15I. It will be noted that users may begin with the guidelines 40 for advice and tutorial assistance, or they can go directly to the templates 42 to complete the work, accessing the guidelines 40 as needed through the Help function. The templates 42 correspond to the first two phases of the ISD process: (1) analysis; and (2) design.")

receiving, via the user interface, a second input from the user, wherein the second input comprises one or more specifications; and (Bass: Paragraph 70, "FIGS. 9A, 9B, 9C, 9D, 9E, 9F, 9G, 9H and 9I combine to form a flow chart of the analysis phase of the idEa templates 42. The analysis templates 60, in their aggregate, combine to perform a complete front end analysis.
The analysis templates comprise nine different categories as follows: (1) Needs Assessment 62, with specific templates designated A001 through A009 in column 1 of FIG. 9B; (2) Needs Analysis 64, with specific templates designated A010 through A019 in column 2 of FIG. 9B; (3) Education Analysis 66, with specific templates designated A020 through A026 in FIG. 9C; (4) Learning Analysis 68, with specific templates designated A030 through A038 in FIG. 9D; (5) Job Analysis 70, with specific templates designated A040 through A049 in FIG. 9E; (6) Task Analysis 72, with specific templates designated A050 through A058 in FIG. 9F; (7) Learner Analysis 74, with specific templates designated A060 through A064 in FIG. 9G; (8) Resource Analysis 76, with specific templates designated A070 through A076 in FIG. 9H; and (9) Existing Materials Analysis 78, with specific templates designated A080 through A088 in FIG. 9I."; Paragraph 71, "As shown in FIG. 14A, the user interface includes a menu of radio buttons for selecting the desired category. For example, selecting Needs Assessment 62 and clicking the Next button leads to the screen in FIG. 14B, which is a list of subtasks useful for Needs Assessment. Clicking on the radio button for the subtask "Decide on the scope of needs assessment and methodology" and clicking the Next button leads the user to the screen shown in FIGS. 14C and 14D, which contains a template form using a variety of devices for soliciting information from the user, e.g., radio buttons, check boxes, text windows, etc. Each template form is an object, and each subtask is an object. The user's responses are saved as serialized objects in Java or as HTML pages when the user exits the template section, using the standard pull down menu bars at the top of each screen. In the same manner, users may open a saved file for further editing either from a file system or from the Web using version control technology such as Webdav explorer."; Paragraph 73, "FIGS. 10A, 10B, 10C, 10D, 10E, 10F, 10G, 10H, 10I, 10J, and 10K combine to form a flow chart of the design phase of the idEa templates 42. The design templates 80, in their aggregate, combine to reflect a complete design process that is extensive and inclusive of most (if not all) existing delivery platforms/systems as well as instructional strategies and methods. There are four main sections in the design templates 80: (1) Develop and Sequence Objectives 82, comprising a number of templates, as shown in FIGS. 10B, 10C, 10D, and 10E; (2) Specify Instructional Strategies and Methods 84, comprising a number of templates, as shown in FIGS. 10F and 10G; (3) Evaluate Instructional Objectives 86, comprising a number of templates, as shown in FIG. 10H; and (4) Examine Organizational Issues 88, comprising a number of templates, as shown in FIGS. 10I, 10J and 10K. The user interface offering the user the opportunity to select the desired section for editing is shown in FIG. 15A. If users have completed analysis templates 60, the information is carried over into the design templates 80. If users have not completed analysis templates 60, users may still start with the design templates 80, supplying whatever missing information that would have been gathered in the analysis templates 60 and compiled, weighted, calculated, filtered and sorted by processor 52. An entry screen for selecting the appropriate option is shown in FIG. 15B. Like the analysis templates, in the design templates the user is presented with a series of screens which progressively narrow the scope of the task (FIGS. 15C-15D) until presented with a template form (FIGS. 15E-15F) for user input. Help is available at each step (e.g., FIG. 15G), and the user is prompted to save the information input before proceeding with the next section of the design templates (FIGS. 15H-15I). Input provided in the design templates 80 is compiled, weighted, calculated, filtered and sorted throughout the process and distributed appropriately within the design templates 80. The results of the design templates 80 are carried over as input into the designer's toolkit 44.")

automatically generating by the one or more processors…the one or more training modules using the one or more training templates based on the identified content. (Bass: Paragraph 81, "The user has the option to add modules by selecting the "Add Module" 130 item from the "Course Items" 128 pull down menu as seen in FIG. 16B, or by clicking the icon 130 from the toolbar on the left side of the screen. FIG. 16I shows a sample start of a module in Web Author, including such objects as the Module Title, Summary and module Objectives. FIG. 16J illustrates the options available to the user when creating a new module, including adding a Page Title, Text, choosing, naming, and sizing an Image, choosing and naming an Audio File, adding HTML links 164 and choosing Page Layout 162. The toolbar along the left side of the screen offers easy access to additional Course Items 128 selections through clickable icons for adding modules 130, pages 132, HTML pages 134, tests 136, questions 138, answers 140, and deleting 142 items. When the "Change the Page Layout" button is selected, the user is presented with a screen similar to FIG. 16G which allows the user to select the page layout from a group of layouts which feature text with a graphics file or a multimedia in a selected position on the screen, a text only layout, or a multimedia file only layout."; Paragraph 88, "The control layer also tracks students' test scores as they navigate through the course. These scores are then sent to the SCORM API Adapter (described below).
The scores are sent to the API (Application Program Interface) by calling the LMSSetValue function located in the SCORM API (the SCORM API is a published Launch and Communications API that provides common interface functions between a course and a Learning Management System (LMS) and was developed by AICC members in collaboration with the Department of Defense's Advanced Distributed Learning (ADL) initiative, and represents a series of functions well known to those skilled in the art)."; Paragraph 100, "With Java Web Start technology, which works with virtually all Web servers, the application service providers (ASP), either internally to the company or externally on the Web, can easily supply a full-featured application to users. Initially, using the application version is slower, since it needs to be downloaded. This will typically take time in the order of minutes, which is high compared to the order of seconds for HTML. However, this is only a "first-time activation" cost. For subsequent uses, the application is cached locally and launches as quickly as any other local application. Consequently, users need only to save updated data files to the server.")

Deolalikar in view of Bass does not explicitly disclose the use of LLMs for analyzing vectorial representations for documents; however, in analogous art of generating documentation, Marom discloses the following:

converting, by the one or more processors using one or more Language Learning Models (LLMs) deployed in a cloud platform, content of the one or more documents into one or more vectorial representations (Marom: Paragraph 46, “In some embodiments, the intelligent document system 102 includes or refers to a machine learning model (e.g., in context of the large language model, in some embodiments, the intelligent document system 102 utilizes a general machine learning model trained for natural language tasks).
In one or more embodiments a “machine learning model” includes a computer algorithm or a collection of computer algorithms that can be trained and/or tuned based on inputs to approximate unknown functions. For example, a machine learning model can include a computer algorithm with branches, weights, or parameters that changed based on training data to improve for a particular task. Thus, a machine learning model can utilize one or more learning techniques to improve in accuracy and/or effectiveness. Example machine learning models include various types of decision trees, support vector machines, Bayesian networks, random forest models, or neural networks (e.g., deep neural networks).”; Paragraph 113, “Specifically, the subset of the digital documents 708 includes digital documents that satisfy a similarity threshold with the digital document 706. For instance, the intelligent document system 102 can calculate a cosine similarity, a Euclidean distance, and/or generate word embeddings of the digital document 706 and compare the digital document 706 with the digital documents 704. In doing so, the intelligent document system 102 can establish a cut-off mark based on the various comparison methods (e.g., cosine similarity, Euclidian distance, or word embeddings) that indicates a satisfaction of similarity.”; Paragraph 158, “Furthermore, from content items 1309, the intelligent document system 102 utilizes a similarity threshold to identify a subset of the content items 1309 that satisfy the similarity threshold 1310. For instance, similar to the discussion above, the intelligent document system 102 can employ cosine similarity, Euclidean distance, or a vector embedding space to determine the similarity of the content items 1309 with the digital document at hand. As further shown in FIG. 
13B, based on both the features 1308 and the subset of the content items 1309, the intelligent document system 102 generates a suggested document modification element 1312.”) …using one or more language learning models (LLM)s…and the one or more vectorial representations; (Marom: Paragraph 46, “In some embodiments, the intelligent document system 102 includes or refers to a machine learning model (e.g., in context of the large language model, in some embodiments, the intelligent document system 102 utilizes a general machine learning model trained for natural language tasks). In one or more embodiments a “machine learning model” includes a computer algorithm or a collection of computer algorithms that can be trained and/or tuned based on inputs to approximate unknown functions. For example, a machine learning model can include a computer algorithm with branches, weights, or parameters that changed based on training data to improve for a particular task. Thus, a machine learning model can utilize one or more learning techniques to improve in accuracy and/or effectiveness. Example machine learning models include various types of decision trees, support vector machines, Bayesian networks, random forest models, or neural networks (e.g., deep neural networks).”; Paragraph 113, “Specifically, the subset of the digital documents 708 includes digital documents that satisfy a similarity threshold with the digital document 706. For instance, the intelligent document system 102 can calculate a cosine similarity, a Euclidean distance, and/or generate word embeddings of the digital document 706 and compare the digital document 706 with the digital documents 704. 
In doing so, the intelligent document system 102 can establish a cut-off mark based on the various comparison methods (e.g., cosine similarity, Euclidian distance, or word embeddings) that indicates a satisfaction of similarity.”; Paragraph 158, “Furthermore, from content items 1309, the intelligent document system 102 utilizes a similarity threshold to identify a subset of the content items 1309 that satisfy the similarity threshold 1310. For instance, similar to the discussion above, the intelligent document system 102 can employ cosine similarity, Euclidean distance, or a vector embedding space to determine the similarity of the content items 1309 with the digital document at hand. As further shown in FIG. 13B, based on both the features 1308 and the subset of the content items 1309, the intelligent document system 102 generates a suggested document modification element 1312.”) …using the one or more LLMs… (Marom: Paragraph 46, “In some embodiments, the intelligent document system 102 includes or refers to a machine learning model (e.g., in context of the large language model, in some embodiments, the intelligent document system 102 utilizes a general machine learning model trained for natural language tasks). In one or more embodiments a “machine learning model” includes a computer algorithm or a collection of computer algorithms that can be trained and/or tuned based on inputs to approximate unknown functions. For example, a machine learning model can include a computer algorithm with branches, weights, or parameters that changed based on training data to improve for a particular task. Thus, a machine learning model can utilize one or more learning techniques to improve in accuracy and/or effectiveness. 
Example machine learning models include various types of decision trees, support vector machines, Bayesian networks, random forest models, or neural networks (e.g., deep neural networks).”; Paragraph 113, “Specifically, the subset of the digital documents 708 includes digital documents that satisfy a similarity threshold with the digital document 706. For instance, the intelligent document system 102 can calculate a cosine similarity, a Euclidean distance, and/or generate word embeddings of the digital document 706 and compare the digital document 706 with the digital documents 704. In doing so, the intelligent document system 102 can establish a cut-off mark based on the various comparison methods (e.g., cosine similarity, Euclidian distance, or word embeddings) that indicates a satisfaction of similarity.”; Paragraph 158, “Furthermore, from content items 1309, the intelligent document system 102 utilizes a similarity threshold to identify a subset of the content items 1309 that satisfy the similarity threshold 1310. For instance, similar to the discussion above, the intelligent document system 102 can employ cosine similarity, Euclidean distance, or a vector embedding space to determine the similarity of the content items 1309 with the digital document at hand. As further shown in FIG. 13B, based on both the features 1308 and the subset of the content items 1309, the intelligent document system 102 generates a suggested document modification element 1312.”) selecting by the one or more processors, one or more vectorial representations based at least on a threshold provided in the one or more specifications of the second input; (Marom: Paragraph 46, “In some embodiments, the intelligent document system 102 includes or refers to a machine learning model (e.g., in context of the large language model, in some embodiments, the intelligent document system 102 utilizes a general machine learning model trained for natural language tasks). In one or more embodiments a “machine learning model” includes a computer algorithm or a collection of computer algorithms that can be trained and/or tuned based on inputs to approximate unknown functions. For example, a machine learning model can include a computer algorithm with branches, weights, or parameters that changed based on training data to improve for a particular task. Thus, a machine learning model can utilize one or more learning techniques to improve in accuracy and/or effectiveness. Example machine learning models include various types of decision trees, support vector machines, Bayesian networks, random forest models, or neural networks (e.g., deep neural networks).”; Paragraph 113, “Specifically, the subset of the digital documents 708 includes digital documents that satisfy a similarity threshold with the digital document 706. For instance, the intelligent document system 102 can calculate a cosine similarity, a Euclidean distance, and/or generate word embeddings of the digital document 706 and compare the digital document 706 with the digital documents 704.
In doing so, the intelligent document system 102 can establish a cut-off mark based on the various comparison methods (e.g., cosine similarity, Euclidian distance, or word embeddings) that indicates a satisfaction of similarity.”; Paragraph 118, “In addition to the information associated with the recipient device 802, the intelligent document system 102 also accesses information from similar user accounts. As shown in FIG. 8, the intelligent document system 102 identifies user account(s) 804 (e.g., other user accounts part of the environment of the content management system 108 and the intelligent document system 102). Further, the intelligent document system 102 utilizes a similarity threshold 806 to identify a subset of user accounts deemed to satisfy the similarity threshold relative to the user account 800. In other words, the intelligent document system 102 identifies a subset of user accounts that have similar historical activity to the user account 800.”; Paragraph 158, “Furthermore, from content items 1309, the intelligent document system 102 utilizes a similarity threshold to identify a subset of the content items 1309 that satisfy the similarity threshold 1310. For instance, similar to the discussion above, the intelligent document system 102 can employ cosine similarity, Euclidean distance, or a vector embedding space to determine the similarity of the content items 1309 with the digital document at hand. As further shown in FIG. 13B, based on both the features 1308 and the subset of the content items 1309, the intelligent document system 102 generates a suggested document modification element 1312.”; Paragraph 166, “Moreover, in some embodiments the intelligent document system 102 performs a similarity comparison between the current digital document and the previously signed digital documents. 
In some such embodiments, for previously signed digital documents that satisfy a similarity threshold (e.g., 90% similarity) the intelligent document system 102 performs an act 1402 of generating the digital document with highlighted portions. For instance, as shown in FIG. 14, the intelligent document system 102 generates a file view 1404 of the digital document and further provides an indication 1406 (e.g., a highlighted portion) of the portion of the digital document that differs from a previously signed digital document.”; Paragraph 180, “Moreover, in one or more embodiments the series of acts 1500 includes generating, for the digital document, a legitimacy score that indicates the digital document satisfying a similarity threshold for additional digital documents that previously received digital signatures. Further, in one or more embodiments the series of acts 1500 includes identifying at least one account that satisfies a similarity threshold with a recipient account associated with a recipient device. Further, in one or more embodiments, the series of acts 1500 includes determining a task completion metric associated with the at least one account. Moreover, in one or more embodiments the series of acts 1500 includes persisting a state of the interpreter corresponding to the first output. Further, in one or more embodiments the series of acts 1500 includes generating a time prediction for the recipient device completing a task related to the digital document based on the task completion metric.”) identifying using the selected one or more vectorial representations, content relevant to the one or more topics: (Marom: Paragraph 46, “In some embodiments, the intelligent document system 102 includes or refers to a machine learning model (e.g., in context of the large language model, in some embodiments, the intelligent document system 102 utilizes a general machine learning model trained for natural language tasks). 
In one or more embodiments a “machine learning model” includes a computer algorithm or a collection of computer algorithms that can be trained and/or tuned based on inputs to approximate unknown functions. For example, a machine learning model can include a computer algorithm with branches, weights, or parameters that changed based on training data to improve for a particular task. Thus, a machine learning model can utilize one or more learning techniques to improve in accuracy and/or effectiveness. Example machine learning models include various types of decision trees, support vector machines, Bayesian networks, random forest models, or neural networks (e.g., deep neural networks).”; Paragraph 113, “Specifically, the subset of the digital documents 708 includes digital documents that satisfy a similarity threshold with the digital document 706. For instance, the intelligent document system 102 can calculate a cosine similarity, a Euclidean distance, and/or generate word embeddings of the digital document 706 and compare the digital document 706 with the digital documents 704. In doing so, the intelligent document system 102 can establish a cut-off mark based on the various comparison methods (e.g., cosine similarity, Euclidian distance, or word embeddings) that indicates a satisfaction of similarity.”; Paragraph 118, “In addition to the information associated with the recipient device 802, the intelligent document system 102 also accesses information from similar user accounts. As shown in FIG. 8, the intelligent document system 102 identifies user account(s) 804 (e.g., other user accounts part of the environment of the content management system 108 and the intelligent document system 102). Further, the intelligent document system 102 utilizes a similarity threshold 806 to identify a subset of user accounts deemed to satisfy the similarity threshold relative to the user account 800. 
In other words, the intelligent document system 102 identifies a subset of user accounts that have similar historical activity to the user account 800.”; Paragraph 158, “Furthermore, from content items 1309, the intelligent document system 102 utilizes a similarity threshold to identify a subset of the content items 1309 that satisfy the similarity threshold 1310. For instance, similar to the discussion above, the intelligent document system 102 can employ cosine similarity, Euclidean distance, or a vector embedding space to determine the similarity of the content items 1309 with the digital document at hand. As further shown in FIG. 13B, based on both the features 1308 and the subset of the content items 1309, the intelligent document system 102 generates a suggested document modification element 1312.”; Paragraph 166, “Moreover, in some embodiments the intelligent document system 102 performs a similarity comparison between the current digital document and the previously signed digital documents. In some such embodiments, for previously signed digital documents that satisfy a similarity threshold (e.g., 90% similarity) the intelligent document system 102 performs an act 1402 of generating the digital document with highlighted portions. For instance, as shown in FIG. 14, the intelligent document system 102 generates a file view 1404 of the digital document and further provides an indication 1406 (e.g., a highlighted portion) of the portion of the digital document that differs from a previously signed digital document.”; Paragraph 180, “Moreover, in one or more embodiments the series of acts 1500 includes generating, for the digital document, a legitimacy score that indicates the digital document satisfying a similarity threshold for additional digital documents that previously received digital signatures. 
Further, in one or more embodiments the series of acts 1500 includes identifying at least one account that satisfies a similarity threshold with a recipient account associated with a recipient device. Further, in one or more embodiments, the series of acts 1500 includes determining a task completion metric associated with the at least one account. Moreover, in one or more embodiments the series of acts 1500 includes persisting a state of the interpreter corresponding to the first output. Further, in one or more embodiments the series of acts 1500 includes generating a time prediction for the recipient device completing a task related to the digital document based on the task completion metric.”) Deolalikar discloses a method for extracting and classifying topics and content from documents for assembly into training materials. Bass discloses a method for creating templates and modules using available training materials. Marom discloses a method of using LLMs to identify similar documents using vectors to generate new documentation. At the time of Applicant's filed invention one of ordinary skill in the art would have deemed it obvious to combine the methods of Deolalikar with the teachings of Bass in order to improve the efficiency and ease of generation of training modules as disclosed by Bass (Bass: Paragraph 6: "However, none of these tools or applications provides a seamless, open, scalable and expandable environment for working and learning and which allows organizations to "plug-in" tools and applications that they have already invested in as well as to produce new tools and applications they will use in the future."). It would have been further obvious to one of ordinary skill in the art to combine the methods of Deolalikar in view of Bass with the teachings of Marom in order to improve the efficiency of document creation as disclosed by Marom (Marom: Paragraph 36, “For example, the intelligent document system can provide improved efficiency.
For instance, the intelligent document system eliminates excessive user interactions from a requestor shuffling between multiple different user interfaces and/or applications to obtain relevant content for generating a digital document.”) Claim(s) 2, 10, and 18 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1, 9, and 17. Deolalikar further discloses the following: receiving the one or more documents from one or more users associated with the facility, wherein the one or more documents comprises at least one of: one or more manuals, one or more research papers, one or more white papers, one or more journals, one or more standard operating procedures, and one or more customer submitted documents; and (Deolalikar: Paragraph 1, "The amount of documents containing text has exponentially grown since the advent of computer networking. Individuals and business entities are disseminating more and more information in the form of textual documents via networks such as the Internet. These textual documents may be associated with a myriad of individual and corporate activities including, for example, the sale of goods and services, the reporting of news, and, in general, the sharing of ideas."; Paragraph 36, "Throughout the below description, an example of classifying a number of documents in a news reporting scenario is described in which a number of people such as reporters prepare a number of textual documents.
After being produced, these textual documents are classified by a machine learning classifying device in order to obtain a number of cataloged textual documents arranged by topics such as, for example, economy, sports, and politics, among many other topics.") storing the one or more documents related to the one or more operations in the database, wherein the one or more operations are related to at least one of: research and development, manufacturing, shipping, material handling, legal, complaints, and quality assurance domain of the facility. (Deolalikar: Paragraph 1, "The amount of documents containing text has exponentially grown since the advent of computer networking. Individuals and business entities are disseminating more and more information in the form of textual documents via networks such as the Internet. These textual documents may be associated with a myriad of individual and corporate activities including, for example, the sale of goods and services, the reporting of news, and, in general, the sharing of ideas."; Paragraph 36, "Throughout the below description, an example of classifying a number of documents in a news reporting scenario is described in which a number of people such as reporters prepare a number of textual documents. 
After being produced, these textual documents are classified by a machine learning classifying device in order to obtain a number of cataloged textual documents arranged by topics such as, for example, economy, sports, and politics, among many other topics.") Claim(s) 3, 11, and 19 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1, 9, and 17. Deolalikar does not explicitly disclose the following; however, in analogous art of document generation, Bass teaches the following: receiving, via the user interface, at least one of: an approval from the user in the facility for addition of at least one document into the database in response to review of the one or more documents, one or more comments on the one or more documents, and one or more requirements associated with the one or more documents, wherein the one or more requirements comprise at least one of: dimension, versioning, and water mark related to the one or more documents. (Bass: Paragraph 80, "In the example shown in FIG. 16F, the text appearing in the text boxes labeled "Course Title" and "Introduction" in the content panel is information which has automatically been carried over from the designer/developer templates, and may be further edited by the user if desired. The content panel also contains an advanced feature button 158, and a display of the current skin 160 together with a "Change Skin" button. The skin 160 shows the external frame of the user interface in which the course will be displayed. When the "Change Skin" button is selected, a screen similar to that depicted in FIG. 16G appears, which allows the user to selected the desired skin by clicking on one of the selections displayed. When the advanced feature button 158 is selected, a screen similar to that shown in FIG.
16H appears, which permits the user to enter metadata in the text boxes in the content panel, including the course Summary, Objectives, Cost, Version number, Copyright information, and Keywords."; Paragraph 84, "After editing the course materials, the user can save the file as an XML file with an .iwa file extension for further editing by selecting the "Save Course" or "Save As" items from the File menu 124 or toolbar, preview either the current page or module by selecting "Preview" or the entire course by selecting "Preview All", or the user may create the course file by selecting "Export" or "Export All". This causes the .iwa file to be compiled by the Java compiler to create an HTML course file, which may be saved to a designated location. The course may be put on the user's hard drive, saved to a CD, Zip disk, or other storage medium, put on a corporate intranet, or uploaded to a learning management system (LMS).") Deolalikar discloses a method for extracting and classifying topics and content from documents for assembly into training materials. Bass discloses a method for creating templates and modules using available training materials. Marom discloses a method of using LLMs to identify similar documents using vectors to generate new documentation. At the time of Applicant's filed invention one of ordinary skill in the art would have deemed it obvious to combine the methods of Deolalikar with the teachings of Bass in order to improve the efficiency and ease of generation of training modules as disclosed by Bass (Bass: Paragraph 6: "However, none of these tools or applications provides a seamless, open, scalable and expandable environment for working and learning and which allows organizations to "plug-in" tools and applications that they have already invested in as well as to produce new tools and applications they will use in the future."). 
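The vector-similarity mechanism the Examiner repeatedly cites from Marom (Paragraphs 113 and 158: cosine similarity, Euclidean distance, or a vector embedding space, with a cut-off mark that "indicates a satisfaction of similarity") can be sketched as follows. This is an illustrative reconstruction only, not code from Marom or from the application; the example embeddings and the 0.9 threshold are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def filter_by_threshold(query_vec, candidate_vecs, threshold=0.9):
    """Return indices of candidates whose similarity to the query
    satisfies the threshold (the 'cut-off mark' in Marom's terms)."""
    return [i for i, v in enumerate(candidate_vecs)
            if cosine_similarity(query_vec, v) >= threshold]

# Hypothetical embeddings for one query document and three candidates.
query = [0.9, 0.1, 0.4]
candidates = [[0.8, 0.2, 0.5], [0.1, 0.9, 0.0], [0.9, 0.1, 0.35]]
print(filter_by_threshold(query, candidates))  # indices that satisfy the cut-off
```

Euclidean distance would work the same way with the comparison reversed (keep candidates whose distance falls below the cut-off rather than above it).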
Claim(s) 4, 12, and 20 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1, 9, and 17. Deolalikar further discloses the following: converting content in the one or more documents into the one or more vectorial representations, wherein the content in the one or more documents is at least one of: a video content, an audio content, and a textual content; (Deolalikar: Paragraph 2, "Classification of these textual documents by topic, for example, may assist in the archiving, retrieval, and dissemination of the textual documents. In this manner, an interested individual may obtain a copy of a number of textual documents associated with a particular topic. However, classification of textual documents by, for example, their topic is extremely time consuming and cost-ineffective for an individual or business entity even with the assistance of a computing device. In order to classify a textual document, typically an individual reads or reviews the textual document and stores that document in a manner that indicates that the textual document belongs to a particular topic, or a computing system searches for key words within the document to sort that document by topic."; Paragraph 13, "More specifically, the present systems and methods provide for generation of training documents for training a classifying device. The method may comprise, with a processor, sampling from a distribution of words in a number of original documents, and creating a number of pseudo-documents from the distribution of words, the pseudo-documents comprising a similar distribution of words as the original documents. A device for classifying textual documents may comprise a processor, and a memory communicatively coupled to the processor.
The memory may comprise a sampling module to, when executed by the processor, determine the distribution of words in a number of original documents, a pseudo-document creation module to, when executed by the processor, create a number of pseudo-documents from the distribution of words, the pseudo-documents comprising a similar distribution of words as the original documents, and a training module to, when executed by the processor, train the device to classify textual documents based on the pseudo-documents.") identifying, based at least on the one or more vectorial representations, the one or more topics in the one or more documents; and (Deolalikar: Paragraph 19, "Extensive empirical studies included herein show that BIDS consistently improves the accuracy of naive Bayes. This is true especially when training data is very scarce. Naive Bayes utilizing the present BIDS systems and methods beats support vector machines (SVM) in accuracy on a majority of standard benchmarks. BIDS is a general approach that can be applied to other problems in Bayesian classification such as learning from imbalanced training data. BIDS may also be used to augment meta-learners such as boosting, bagging, and semi-supervised learning."; Paragraph 56, "Two variants of the MNB model which differ in how they treat word frequencies may be utilized by the present systems and methods. The first of these two variants of the MNB model may be called an Integer Multinomial (IMN) event model. In this model, let |D| denote the number of tokens, counted with multiplicity, in document D. Furthermore, let V = {t_1, ..., t_|V|} be the vocabulary of category c. In the IMN model, a document D is represented as a vector D = {x_1, x_2, ..., x_|V|}, where x_i is the number of occurrences of token t_i in D. The generative model is as follows: each document D in class c is seen as being generated by picking |D| tokens independently, with probability of picking the token t_i given by P(t_i|c). Therefore, the probability P(D|c) of the document D arising from the class c under this generative model is given by the multinomial distribution as follows:"; Paragraph 86, “Finally, a linear support vector machine (SVM) is used as a benchmark. SVMs generally offer the best accuracy in text classification, and linear SVMs are almost as accurate as those with more complex kernels for this task. The SVM-LIGHT package developed at Cornell University was used for the experiments. For tuning, the single "penalty factor" C is varied through the values [10^-4; 10^-3; 10^-2; 10^-1; 1; 10^1; 10^2; 10^3; 10^4] and adopt the value with the highest accuracy.”) classifying the one or more documents based on the one or more extracted topics. (Deolalikar: Paragraph 62, "The idea of statistical language models is that a piece of text is viewed as an instantiation of an underlying probabilistic model. The "naive" in the Naive Bayes classifier comes from the assumption that the occurrence of words is independent given the class, and is, therefore, determined entirely by the P(t_i|c). As demonstrated above, this results in a multinomial generative model. At this point, the correspondence with Bootstrap re-sampling is introduced. Bootstrap re-sampling also is, effectively, a multinomial sampling since it assumes independence of samples. In other words, the "sampling design" for Bootstrap re-sampling matches that for multinomial Naive Bayes. A text document is viewed as being an instantiation of an underlying simple language model. Because of this additional Bootstrap samples from this document may be drawn since the generative model of the text for Naive Bayes classification matches the generative model of the Bootstrap re-sampling.
Therefore, more "documents" are drawn from this one sample, and this will provide the system with as many such "documents" as necessary."; Paragraph 66, "The property that mates BIDS to Naive Bayes is that the latter uses a parameter estimate of the underlying language model in order to build the classifier. In a parameter estimation problem where the sampling design matches Bootstrap sampling, a more robust parameter estimate is obtained by using Bootstrap re-samples of the document. This is because the inter-sample variations that result from the Bootstrap allow the classifier to learn the parameters of the underlying model with more generality, and avoid over fitting to the single sample (D) that was drawn from it initially. In other words, in addition to the estimates of the form P̂(t_i|c) in Eq. 3 above, estimates") Claim(s) 5 and 13 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1 and 9. Deolalikar does not explicitly disclose the following; however, in analogous art of training materials, Bass teaches the following: generate the one or more training templates with one or more fields based on the one or more topics, wherein the one or more fields are based on at least one of: the one or more operations of the facility, one or more user requirements, one or more training requirements, and content in the one or more documents; (Bass: Paragraph 60, "As shown in FIG. 1A, the architecture and infrastructure/framework (referred to as Archistructure.TM., a trademark of PLS Global) includes three main components, viz., designer/developer tools 32, student tools 34 (exported courses from the Web-based Designer ToolKit), and administration/CMI/LMS tools 36, (state-of-the-art tools that launch designer/developer tools and Web Author exported courses). As shown in FIG. 1B, the designer/developer tools 32 include an assortment of objects, such as authoring tools, database tools, advisory tools, learning tools, etc.
Student tools 34 comprise courses exported from Web Author, etc. Administration/CMI/LMS tools 36 include registration tools, tracking tools, assessment tools, scoring tools, reporting tools and scheduling tools that launch designer/developer tools and Web Author exported courses. As shown in FIG. 1C, the architecture may be broadly divided into a set of tools and a set of utilities. The tools include the idEa.TM. (a trademark of PLS Global) tools 38 which include guidelines 40 and templates 42, a designer's toolkit 44, and authoring templates 46 from Web Author (the Web-based Designer ToolKit). The utilities include collaboration vehicles 48, access to administration tools 36, and access to study/organization tools 50, including student tools 34, e.g., exported courses from Web Author 46."; Paragraph 70, "FIGS. 9A, 9B, 9C, 9D, 9E, 9F, 9G, 9H and 9I combine to form a flow chart of the analysis phase of the idEa templates 42. The analysis templates 60, in their aggregate, combine to perform a complete front end analysis. The analysis templates comprise nine different categories as follows: (1) Needs Assessment 62, with specific templates designated A001 through A009 in column 1 of FIG. 9B; (2) Needs Analysis 64, with specific templates designated A010 through A019 in column 2 of FIG. 9B; (3) Education Analysis 66, with specific templates designated A020 through A026 in FIG. 9C; (4) Learning Analysis 68, with specific templates designated A030 through A038 in FIG. 9D; (5) Job Analysis 70, with specific templates designated A040 through A049 in FIG. 9E; (6) Task Analysis 72, with specific templates designated A050 through A058 in FIG. 9F; (7) Learner Analysis 74, with specific templates designated A060 through A064 in FIG. 9G; (8) Resource Analysis 76, with specific templates designated A070 through A076 in FIG. 9H; and (9) Existing Materials Analysis 78, with specific templates designated A080 through A088 in FIG. 9I.") and classify the one or more training templates into one or more categories, wherein the one or more categories comprises at least one of: novice templates, intermediate templates, expert templates, templates with visuals or videos, operations specific templates, customer specific templates, and topic specific templates. (Bass: Paragraph 68, "FIGS. 3A, 3B, and 3C show the menu and database structure of the guidelines 40. Users register and log in to the guidelines 40. The user interface for the log-in screen is shown in FIG. 13A as viewed with Microsoft's Internet Explorer browser. Users may work through the guidelines 40 one section at a time, e.g., the analysis section. Those who are new to instructional design can start at the beginning and work through the entire program in a tutorial mode to the point where they can build their own program. They can view the entire contents of each section by clicking on every link on a screen. If there are no links on a screen, they click "next" and navigate through the next section. These aspects of the user interface are shown in the guidelines welcome screen in FIG. 13B, along with user selectable buttons on the side of the screen which offer access to such additional features as bookmarks 102 (illustrated more fully in FIG. 13E), ID Process Diagrams 104 (e.g., FIG. 13F; each block in the diagram is linked to the first page of the section, so that clicking on the analysis block takes the user to the first analysis screen (FIG. 13G), etc.), Notes 110 (e.g., FIGS. 13H and 13I), a glossary 120 (e.g., FIGS. 13J and 13K) and ID Process Help 54 for help on guideline content (e.g., as seen in FIG. 13L) or system help 122 for help on navigating features (e.g., as seen in FIG. 13M). Users can bookmark their place before exiting the program. Users can also bookmark an unlimited number of screens throughout their viewing of guidelines. Bookmarks can be easily added, printed, or deleted.
Users can create, save, print, and delete notes. The glossary displays a list of glossary terms along with a frame to display the glossary definition of the selected term. It allows users to jump to the first letter of a word using the alphabet buttons. Users can select words in the glossary by scrolling in the "terms" frame. Users also access the glossary from the guidelines by clicking on bold, underscored words 120, as shown in FIG. 13D. Experienced instructional designers who want to know about a specific topic, e.g., how to design and develop Web-based training or job aids that are Web-based, will use ID Help 54, select the topic, and go directly to that section of the guidelines 40."; Paragraph 70, "FIGS. 9A, 9B, 9C, 9D, 9E, 9F, 9G, 9H and 9I combine to form a flow chart of the analysis phase of the idEa templates 42. The analysis templates 60, in their aggregate, combine to perform a complete front end analysis. The analysis templates comprise nine different categories as follows: (1) Needs Assessment 62, with specific templates designated A001 through A009 in column 1 of FIG. 9B; (2) Needs Analysis 64, with specific templates designated A010 through A019 in column 2 of FIG. 9B; (3) Education Analysis 66, with specific templates designated A020 through A026 in FIG. 9C; (4) Learning Analysis 68, with specific templates designated A030 through A038 in FIG. 9D; (5) Job Analysis 70, with specific templates designated A040 through A049 in FIG. 9E; (6) Task Analysis 72, with specific templates designated A050 through A058 in FIG. 9F; (7) Learner Analysis 74, with specific templates designated A060 through A064 in FIG. 9G; (8) Resource Analysis 76, with specific templates designated A070 through A076 in FIG. 9H; and (9) Existing Materials Analysis 78, with specific templates designated A080 through A088 in FIG. 9I."; Paragraph 71, "As shown in FIG. 14A, the user interface includes a menu of radio buttons for selecting the desired category.
For example, selecting Needs Assessment 62 and clicking the Next button leads to the screen in FIG. 14B, which is a list of subtasks useful for Needs Assessment. Clicking on the radio button for the subtask "Decide on the scope of needs assessment and methodology" and clicking the Next button leads the user to the screen shown in FIGS. 14C and 14D, which contains a template form using a variety of devices for soliciting information from the user, e.g., radio buttons, check boxes, text windows, etc. Each template form is an object, and each subtask is an object. The user's responses are saved as serialized objects in Java or as HTML pages when the user exits the template section, using the standard pull down menu bars at the top of each screen. In the same manner, users may open a saved file for further editing either from a file system or from the Web using version control technology such as Webdav explorer.") Deolalikar discloses a method for extracting and classifying topics and content from documents for assembly into training materials. Bass discloses a method for creating templates and modules using available training materials. Marom discloses a method of using LLMs to identify similar documents using vectors to generate new documentation. At the time of Applicant's filed invention one of ordinary skill in the art would have deemed it obvious to combine the methods of Deolalikar with the teachings of Bass in order to improve the efficiency and ease of generation of training modules as disclosed by Bass (Bass: Paragraph 6: "However, none of these tools or applications provides a seamless, open, scalable and expandable environment for working and learning and which allows organizations to "plug-in" tools and applications that they have already invested in as well as to produce new tools and applications they will use in the future."). 
Claim(s) 6 and 14 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1 and 9 Deolalikar does not explicitly disclose the following, however, in analogous art of training materials, Bass teaches the following: receive, via the user interface, the one or more specifications that comprises at least one of: a description of training, a type of training, a level of training, an operation for which training is required, a worker or group of workforces that is to be trained, and a threshold for selecting content from the one or more documents. (Bass: Paragraph 80, "In the example shown in FIG. 16F, the text appearing in the text boxes labeled "Course Title" and "Introduction" in the content panel is information which has automatically been carried over from the designer/developer templates, and may be further edited by the user if desired. The content panel also contains an advanced feature button 158, and a display of the current skin 160 together with a "Change Skin" button. The skin 160 shows the external frame of the user interface in which the course will be displayed. When the "Change Skin" button is selected, a screen similar to that depicted in FIG. 16G appears, which allows the user to select the desired skin by clicking on one of the selections displayed. When the advanced feature button 158 is selected, a screen similar to that shown in FIG. 16H appears, which permits the user to enter metadata in the text boxes in the content panel, including the course Summary, Objectives, Cost, Version number, Copyright information, and Keywords.") Deolalikar discloses a method for extracting and classifying topics and content from documents for assembly into training materials. Bass discloses a method for creating templates and modules using available training materials. Marom discloses a method of using LLMs to identify similar documents using vectors to generate new documentation. 
At the time of Applicant's filed invention one of ordinary skill in the art would have deemed it obvious to combine the methods of Deolalikar with the teachings of Bass in order to improve the efficiency and ease of generation of training modules as disclosed by Bass (Bass: Paragraph 6: "However, none of these tools or applications provides a seamless, open, scalable and expandable environment for working and learning and which allows organizations to "plug-in" tools and applications that they have already invested in as well as to produce new tools and applications they will use in the future."). Claim(s) 7 and 15 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1 and 9 Deolalikar further discloses the following: identifying content using one or more vectorial representations of the one or more documents; and (Deolalikar: Paragraph 19, "Extensive empirical studies included herein show that BIDS consistently improves the accuracy of naive Bayes. This is true especially when training data is very scarce. Naive Bayes utilizing the present BIDS systems and methods beats support vector machines (SVM) in accuracy on a majority of standard benchmarks. BIDS is a general approach that can be applied to other problems in Bayesian classification such as learning from imbalanced training data. BIDS may also be used to augment meta-learners such as boosting, bagging, and semi-supervised learning."; Paragraph 56, "Two variants of the MNB model which differ in how they treat word frequencies may be utilized by the present systems and methods. The first of these two variants of the MNB model may be called an Integer Multinomial (IMN) event model. In this model, let |D| denote the number of tokens, counted with multiplicity, in document D. Furthermore, let V={t.sub.1, ... , t.sub.|V|} be the vocabulary of category c. In the IMN model, a document D is represented as a vector D={x.sub.1, x.sub.2, ... 
, x.sub.|V|}, where x.sub.i is the number of occurrences of token t.sub.i in D. The generative model is as follows: each document D in class c is seen as being generated by picking |D| tokens independently, with probability of picking the token t.sub.i given by P(t.sub.i|c). Therefore, the probability P(D|c) of the document D arising from the class c under this generative model is given by the multinomial distribution as follows:") Deolalikar does not explicitly disclose the following, however, in analogous art of training materials, Bass teaches the following: selecting at least one template from the one or more templates based on the second input; (Bass: Paragraph 61, "The idEa tool 38 is broadly divided into the guidelines 40 and templates 42, as shown in FIG. 2. The guidelines is a rich knowledge base based on the Instructional Systems Design (ISD) Model. The browser-based guidelines 40 provide the organization with principles, a tutorial, and guidelines for designing and developing instructionally sound training programs. Structurally the guidelines include content display, navigation means, a glossary, help including the tutorial, a notepad and bookmark tool, all deriving their content from a content database via a data processor. The idEa templates 42 are Java-based and allow users to complete analysis and design tasks and activities online. The templates 42 are either downloaded from the web server 28 or accessed through a browser using the Java Web Start plug-in so that the organization may input information to design their job training program. The templates 42--and their contents--are structured as objects so that course designers/developers and subject matter experts can reuse them. The templates 42 behave like wizards to guide the user in completing the template 42. A wizard is an interactive utility that guides a user through a process step by step. Templates are presented to users for their input of data specific to a task or activity. 
Pop-up windows appear at certain places to offer suggestions, tips, and the opportunity to seek help. Each template has a toolbar offering users several functions, e.g., file options, help function, etc. Users may save templates in a file, to their desktop, to their LAN, to disk, to export to HTML, etc. Users can reuse templates. A data processor 52 performs one or more of the following processes, depending on the particular task represented by the template, using a rule-based processing engine: (1) compiles the information; (2) weights the information based on a rule-based process; (3) calculates based on a rule-based process; and (4) filters/sorts the information based on a rule-based process. Once the processing is complete, the processor 52 outputs recommendations as process objects. The objects can be different forms depending on their intent and the type of business. The templates themselves are objects, as well as the fields and the information contained in the fields. Depending on the template and its purpose, the template references needed objects and displays them in a structured format, outputting desired information as well as allowing users to insert or change information, as shown below in FIGS. 14A-14D and FIGS. 15A-15I. It will be noted that users may begin with the guidelines 40 for advice and tutorial assistance, or they can go directly to the templates 42 to complete the work, accessing the guidelines 40 as needed through the Help function. The templates 42 correspond to the first two phases of the ISD process: (1) analysis; and (2) design."; Paragraph 70, "FIGS. 9A, 9B, 9C, 9D, 9E, 9F, 9G, 9H and 9I combine to form a flow chart of the analysis phase of the idEa templates 42. The analysis templates 60, in their aggregate, combine to perform a complete front end analysis. The analysis templates comprise nine different categories as follows: (1) Needs Assessment 62, with specific templates designated A001 through A009 in column 1 of FIG. 
9B; (2) Needs Analysis 64, with specific templates designated A010 through A019 in column 2 of FIG. 9B; (3) Education Analysis 66, with specific templates designated A020 through A026 in FIG. 9C; (4) Learning Analysis 68, with specific templates designated A030 through A038 in FIG. 9D; (5) Job Analysis 70, with specific templates designated A040 through A049 in FIG. 9E; (6) Task Analysis 72, with specific templates designated A050 through A058 in FIG. 9F; (7) Learner Analysis 74, with specific templates designated A060 through A064 in FIG. 9G; (8) Resource Analysis 76, with specific templates designated A070 through A076 in FIG. 9H; and (9) Existing Materials Analysis 78, with specific templates designated A080 through A088 in FIG. 9I.") filling one or more fields of the at least one template using the identified content. (Bass: Paragraph 61, "The idEa tool 38 is broadly divided into the guidelines 40 and templates 42, as shown in FIG. 2. The guidelines is a rich knowledge base based on the Instructional Systems Design (ISD) Model. The browser-based guidelines 40 provide the organization with principles, a tutorial, and guidelines for designing and developing instructionally sound training programs. Structurally the guidelines include content display, navigation means, a glossary, help including the tutorial, a notepad and bookmark tool, all deriving their content from a content database via a data processor. The idEa templates 42 are Java-based and allow users to complete analysis and design tasks and activities online. The templates 42 are either downloaded from the web server 28 or accessed through a browser using the Java Web Start plug-in so that the organization may input information to design their job training program. The templates 42--and their contents--are structured as objects so that course designers/developers and subject matter experts can reuse them. The templates 42 behave like wizards to guide the user in completing the template 42. 
A wizard is an interactive utility that guides a user through a process step by step. Templates are presented to users for their input of data specific to a task or activity. Pop-up windows appear at certain places to offer suggestions, tips, and the opportunity to seek help. Each template has a toolbar offering users several functions, e.g., file options, help function, etc. Users may save templates in a file, to their desktop, to their LAN, to disk, to export to HTML, etc. Users can reuse templates. A data processor 52 performs one or more of the following processes, depending on the particular task represented by the template, using a rule-based processing engine: (1) compiles the information; (2) weights the information based on a rule-based process; (3) calculates based on a rule-based process; and (4) filters/sorts the information based on a rule-based process. Once the processing is complete, the processor 52 outputs recommendations as process objects. The objects can be different forms depending on their intent and the type of business. The templates themselves are objects, as well as the fields and the information contained in the fields. Depending on the template and its purpose, the template references needed objects and displays them in a structured format, outputting desired information as well as allowing users to insert or change information, as shown below in FIGS. 14A-14D and FIGS. 15A-15I. It will be noted that users may begin with the guidelines 40 for advice and tutorial assistance, or they can go directly to the templates 42 to complete the work, accessing the guidelines 40 as needed through the Help function. The templates 42 correspond to the first two phases of the ISD process: (1) analysis; and (2) design."; Paragraph 70, "FIGS. 9A, 9B, 9C, 9D, 9E, 9F, 9G, 9H and 9I combine to form a flow chart of the analysis phase of the idEa templates 42. The analysis templates 60, in their aggregate, combine to perform a complete front end analysis. 
The analysis templates comprise nine different categories as follows: (1) Needs Assessment 62, with specific templates designated A001 through A009 in column 1 of FIG. 9B; (2) Needs Analysis 64, with specific templates designated A010 through A019 in column 2 of FIG. 9B; (3) Education Analysis 66, with specific templates designated A020 through A026 in FIG. 9C; (4) Learning Analysis 68, with specific templates designated A030 through A038 in FIG. 9D; (5) Job Analysis 70, with specific templates designated A040 through A049 in FIG. 9E; (6) Task Analysis 72, with specific templates designated A050 through A058 in FIG. 9F; (7) Learner Analysis 74, with specific templates designated A060 through A064 in FIG. 9G; (8) Resource Analysis 76, with specific templates designated A070 through A076 in FIG. 9H; and (9) Existing Materials Analysis 78, with specific templates designated A080 through A088 in FIG. 9I.") Deolalikar discloses a method for extracting and classifying topics and content from documents for assembly into training materials. Bass discloses a method for creating templates and modules using available training materials. Marom discloses a method of using LLMs to identify similar documents using vectors to generate new documentation. At the time of Applicant's filed invention one of ordinary skill in the art would have deemed it obvious to combine the methods of Deolalikar with the teachings of Bass in order to improve the efficiency and ease of generation of training modules as disclosed by Bass (Bass: Paragraph 6: "However, none of these tools or applications provides a seamless, open, scalable and expandable environment for working and learning and which allows organizations to "plug-in" tools and applications that they have already invested in as well as to produce new tools and applications they will use in the future."). 
Claim(s) 8 and 16 – Deolalikar in view of Bass and Marom disclose the limitations of claims 1 and 9 Deolalikar further discloses the following: identifying an interrelation between the one or more documents, the one or more topics, and the one or more templates; and (Deolalikar: Paragraph 51, "The machine learning classifying device (100) may further comprise a training module (160). The training module (160) utilizes a number of documents to train a classification module (170) associated with the machine learning classifying device (100). These documents comprise, for example, the original documents stored in the original document database (142), the pseudo-documents stored in the pseudo-document database (152), or combinations of the original documents and the pseudo-documents. As will be described below in more detail, the training of the classification module (170) may be achieved by capturing characteristics of interest of the original documents' and the pseudo-documents' to underlying probability distribution, and identifying the probability distribution as training data. This training data may be utilized as instances of the possible relations between observed variables in documents to be characterized such as the above-described test documents."; Paragraph 73, "In another example, distribution of words may include a distribution of phrases, words, or a combination thereof. In this example, the distribution of words as described above may include a number of phrases. For example, if the machine learning classifying device (100) were learning to classify sports-related test documents using a number of sports-related original documents and sports-related pseudo-documents, the distribution of words derived from the sports-related original documents to derive the sports-related pseudo-documents may include the phrase "national football league" as a phrase, "quarterback" as a word, or a combination of both the phrase and the word. 
This example allows for the inclusion of phrases that provide additional context to the machine learning classifying device (100).") updating the database with one or more new documents, one or more new topics, and one or more new templates. (Deolalikar: Paragraph 22, "Of the many classifiers that have been proposed for the task of text classification, a "baseline" may be the naive Bayes probabilistic classifier. The naive Bayes classifier has several advantages that make it attractive for enterprise applications. It is easy to implement, and can be trained fast. But an aspect of naive Bayes that makes it attractive for enterprise applications is that it is transparent, and can be used for diagnostics. The user can easily understand the classifier, and can therefore troubleshoot it easily. In comparison, a solved model of a SVM is often hard to interpret. This difference is especially important in situations where the data is periodically being updated, the classes are still changing, or the system is still under construction."; Paragraph 76, "The number of pseudo-documents created (block 210) from a number of original documents may be between 10 and 40. In another example, the number of pseudo-documents created (block 210) from a number of original documents may be between 16 and 32. These pseudo-documents are used as examples or training data to teach the machine learning classifying device (100) how to classify textual documents. In one example, these pseudo-documents may be used by the classifying system or device alone or in combination with the original documents, to provide the training of the classifying system or device."; Paragraph 106, "The specification and figures describe generation of training documents for training a classifying device. 
The method may comprise, with a processor, sampling from a distribution of words in a number of original documents, and creating a number of pseudo-documents from the distribution of words, the pseudo-documents comprising a similar distribution of words as the original documents. The systems and methods of generating training documents may have a number of advantages. First, BIDS makes no assumptions about the model: it is truly non-parametric. This is in contrast with semi-supervised learning, where some assumptions about the underlying model must be made to match unlabelled data to it. Second, BIDS is conceptually simple and extremely easy to implement. The present implementation took 85 lines in PERL programming language. Although PERL is described here as being the program language used to write the present implementation, any other programming language may be used. Third, BIDS is fast. Specifically, BIDS adds O(.SIGMA..sub.D.epsilon.T|D|) time to training the classifier, and nothing to testing or running the trained classifier. Other advantages are described herein.") Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. 
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Philip N Warner whose telephone number is (571)270-7407. The examiner can normally be reached Monday-Friday 7am-4:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jerry O’Connor, can be reached at 571-272-6787. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Philip N Warner/Examiner, Art Unit 3624 /Jerry O'Connor/Supervisory Patent Examiner, Group Art Unit 3624

Prosecution Timeline

May 03, 2024
Application Filed
Jul 24, 2025
Non-Final Rejection — §103
Nov 21, 2025
Response Filed
Mar 06, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596974
MULTI-LAYER ABRASIVE TOOLS FOR CONCRETE SURFACE PROCESSING
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12596984
INFORMATION GENERATION APPARATUS, INFORMATION GENERATION METHOD AND PROGRAM
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12579490
GENERATING SUGGESTIONS WITHIN A DATA INTEGRATION SYSTEM
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12567011
BATTERY LEDGER MANAGEMENT SYSTEM AND METHOD OF BATTERY LEDGER MANAGEMENT
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12493819
UTILIZING MACHINE LEARNING MODELS TO GENERATE INITIATIVE PLANS
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 36%
With Interview: 65% (+28.6%)
Median Time to Grant: 3y 7m
PTA Risk: Moderate
Based on 107 resolved cases by this examiner. Grant probability derived from career allow rate.
