Prosecution Insights
Last updated: April 19, 2026
Application No. 18/773,157

METHODS AND SYSTEMS FOR USE OF ARTIFICIAL INTELLIGENCE TO GENERATE METADATA TEMPLATES FOR CONTENT ITEMS IN AN ONLINE COLLABORATION ENVIRONMENT

Status: Final Rejection (§103, §112)
Filed: Jul 15, 2024
Examiner: DWIVEDI, MAHESH H
Art Unit: 2168
Tech Center: 2100 — Computer Architecture & Software
Assignee: Box Inc.
OA Round: 2 (Final)
Grant Probability: 69% (Favorable)
OA Rounds: 3-4
To Grant: 3y 6m
With Interview: 74%

Examiner Intelligence

Career Allow Rate: 69% (above average; 521 granted / 751 resolved; +14.4% vs TC avg)
Interview Lift: +4.3% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 6m typical timeline; 21 applications currently pending
Total Applications: 772 across all art units (career history)

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 40.2% (+0.2% vs TC avg)
§102: 17.2% (-22.8% vs TC avg)
§112: 19.5% (-20.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 751 resolved cases.

Office Action

Rejections under §103 and §112
DETAILED ACTION

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

2. Receipt of Applicant’s Amendment filed on 01/29/2026 is acknowledged. The amendment includes the amending of claims 1, 4-5, 11-12, 15, and 18-19.

Claim Rejections - 35 USC § 112

3. The rejections raised in the Office Action mailed on 10/29/2025 have been overcome by applicant’s amendment received on 01/29/2026.

4. The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

5. The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

6. Claim 8 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention. Specifically, it is unclear if the one or more reference documents are defining practices and procedure (see the limitation “generate a natural language prompt for each determined document type based on the intent for each determined document type and one or more reference documents defining practices and procedure defining practices and procedure defining constraints on the natural language prompt”). For the purposes of examining the instant application, the examiner interprets the claimed one or more reference documents as defining practices and procedure as claimed in independent claims 1 and 15. Dependent claims 9-14 are rejected for incorporating the deficiencies of independent claim 8.

Claim Rejections - 35 USC § 103

7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

8. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

9. Claims 1, 8, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Puram et al. (U.S. PGPUB 2024/0020328), in view of Fernandez et al. (U.S. PGPUB 2025/0045516), and further in view of Gupta et al. (U.S. PGPUB 2024/0428783).

10.
Regarding claims 1, 8, and 15, Puram teaches a method, system, and non-transitory, computer readable medium comprising: A) storing, by the cloud-based collaboration system, a set of one or more documents from a client system (Paragraphs 20, 23, and 27); B) wherein the documents comprise documents not previously present in storage of the cloud-based collaboration system and for which a document type has not yet been determined (Paragraphs 20, 23, and 27); C) pre-processing, by the cloud-based collaboration system, the set of one or more documents to determine one or more document types in the set of one or more documents and an intent for each determined document type (Paragraphs 20 and 37); E) generating, by the cloud-based collaboration system, a template associated with each document type (Paragraph 43). The examiner notes that Puram teaches “storing, by the cloud-based collaboration system, a set of one or more documents from a client system” as “Certain embodiments of the concepts, techniques, and structures disclosed herein are directed intelligent verification of documents. The verification of the documents is achieved by building and using multiple classification models to templatize the different types documents received from various sources, and verifying the documents based on data (or “target data”) extracted from the documents using the templates” (Paragraph 20), “hosting system 104 can be provided within a cloud computing environment, which may also be referred to as a cloud, cloud environment, cloud computing or cloud network. The cloud computing environment can provide the delivery of shared computing services (e.g., microservices) and/or resources to multiple users or tenants” (Paragraph 23), and “Repositories 124 can include various types of data repositories such as conventional file systems, cloud-based storage services such as SHAREFILE, BITBUCKET, DROPBOX, and MICROSOFT ONEDRIVE, and web servers that host files, documents, and other materials” (Paragraph 27). The examiner further notes that receiving documents in a cloud environment (which is collaborative) teaches the claimed storing. The examiner further notes that Puram teaches “wherein the documents comprise documents not previously present in storage of the cloud-based collaboration system and for which a document type has not yet been determined” as “Certain embodiments of the concepts, techniques, and structures disclosed herein are directed intelligent verification of documents. The verification of the documents is achieved by building and using multiple classification models to templatize the different types documents received from various sources, and verifying the documents based on data (or “target data”) extracted from the documents using the templates. The types of documents may include invoices, purchase orders, receipts, shipping orders, advance shipping notifications, export documents, and any other documents received by an organization. The sources (or “document sources”) may include suppliers, factories, original design manufacturers (ODMs), contract manufacturers, partners, vendors, financial institutions, government agencies, and other entity or organization that sends or otherwise provides a document to the organization. When a new document is received, the same classification models used in generating the templates can be used to determine a document type and a document source of the new document. 
Once the new document is classified into an appropriate document type and document source, a template which is appropriate for extracting target data from documents of the same document type and from the same document source as the new document can be determined (or “identified”) and used to extract the target data from the new document. The target data extracted from the new document can then be verified in order to verify the new document” (Paragraph 20), “hosting system 104 can be provided within a cloud computing environment, which may also be referred to as a cloud, cloud environment, cloud computing or cloud network. The cloud computing environment can provide the delivery of shared computing services (e.g., microservices) and/or resources to multiple users or tenants” (Paragraph 23), and “Repositories 124 can include various types of data repositories such as conventional file systems, cloud-based storage services such as SHAREFILE, BITBUCKET, DROPBOX, and MICROSOFT ONEDRIVE, and web servers that host files, documents, and other materials” (Paragraph 27). The examiner further notes that the receiving of new documents (i.e. the claimed documents not previously present) in a cloud environment (which is collaborative) includes determining a document type of such new documents (which entails that the document types have not been previously determined). The examiner further notes that Puram teaches “pre-processing, by the cloud-based collaboration system, the set of one or more documents to determine one or more document types in the set of one or more documents and an intent for each determined document type” as “Certain embodiments of the concepts, techniques, and structures disclosed herein are directed intelligent verification of documents. The verification of the documents is achieved by building and using multiple classification models to templatize the different types documents received from various sources, and verifying the documents based on data (or “target data”) extracted from the documents using the templates. The types of documents may include invoices, purchase orders, receipts, shipping orders, advance shipping notifications, export documents, and any other documents received by an organization. The sources (or “document sources”) may include suppliers, factories, original design manufacturers (ODMs), contract manufacturers, partners, vendors, financial institutions, government agencies, and other entity or organization that sends or otherwise provides a document to the organization. When a new document is received, the same classification models used in generating the templates can be used to determine a document type and a document source of the new document. Once the new document is classified into an appropriate document type and document source, a template which is appropriate for extracting target data from documents of the same document type and from the same document source as the new document can be determined (or “identified”) and used to extract the target data from the new document. The target data extracted from the new document can then be verified in order to verify the new document” (Paragraph 20) and “Document source classification engine 116 is operable to classify documents of a given document type into their document sources. A document source may correspond to a sender or provider of a document (e.g., a source from which a document was received). The organization has knowledge of the various sources of the different types of documents it receives. 
For example, invoices may be from source A, source B, source C, source D, source E, source F, and source G. As another example, purchase orders may be from source A, source F, source H, source I, source J, and source K. As still another example, shipping orders may be from source B, source D, source L, source M, and source N. In the above example, documents (e.g., the organization's historical documents) that are categorized as invoices may be classified as being from one of source A, source B, source C, source D, source E, source F, or source G. Similarly, documents that are categorized as purchase orders may be classified as being from one of source A, source F, source H, source I, source J, or source K, and documents that are categorized as shipping orders may be classified as being from one of source B, source D, source L, source M, or source N” (Paragraph 37). The examiner further notes that the classification of new documents includes determining a document type (i.e. the claimed one or more document types) and a document source (i.e. the undefined claimed intent in the broadest reasonable interpretation). The examiner further notes that Puram teaches “generating, by the cloud-based collaboration system, a template associated with each document type” as “templatization module 118 is operable to generate templates for extracting target data from the different types of documents received by the organization from the various document sources. As used herein, the term “target data” refers to data, such as the contents or information within a field(s) in a document, which is to be extracted from the document. For example, target data in an invoice may include an invoice number, a bill to address, and an invoice value. The target data in an invoice may also optionally include a sender of the invoice and/or an invoice date. As another example, target data in a receipt may include a receipt date, a price before taxes, a price after taxes, a customer name, and contact information. As still another example, target data in a shipping order may include an order number, a shipping address, a billing address, and item detail(s). The examples of target data are merely illustrative and may vary depending on the type of document” (Paragraph 43). The examiner further notes that the generation of templates for different document types teaches the claimed generating. Puram does not explicitly teach: D) generating, by the cloud-based collaboration system, a natural language prompt for each determined document type based on the intent for each determined document type and one or more reference documents defining practices and procedure defining constraints on the natural language prompt; E) generating, by the cloud-based collaboration system, from each natural language prompt a metadata template using a generative Artificial Intelligence (AI); and F) refining, by the cloud-based collaboration system, each generated metadata template. Fernandez, however, teaches “generating, by the cloud-based collaboration system, a natural language prompt for each determined document type based on the intent for each determined document type and one or more reference documents defining constraints on the natural language prompt” as “The request processing unit 122 receives a request for a template suggestion from the native application 114 or the web application 190. The template suggestion may include information identifying one or more templates and one or more template subsections from one or multiple templates. 
The request includes an indication identifying the application for which the template is being requested and textual content from an electronic content item for which the template is being requested. The textual content may include a title and/or headers extracted from the electronic content item, a sentence or sentences of text from the electronic content item, a paragraph or paragraphs from the electronic content item, a section or sections of the electronic content item, and/or other portion of the text from the electronic content item. In some implementations, other information, such as a position of a cursor or other indication of the portion of the electronic content item being worked on is included in the request received by the request processing unit 122 and is provided to the prompt construction unit 124 to provide the language model 138 with information indicative of the portion of the electronic content item on which the user is currently working. This information can be used, in part, by the language model 138 to determine an appropriate subsection of a template to be recommended to the user. In a non-limiting example, a first subsection of a template may be relevant, if the user is working on a content of a first type within the electronic content item while a second subsection of a template may be more relevant, if the user is working on a content of a second type within the electronic content item. The prompt construction unit 124 may also include additional contextual information about the user in some implementations. The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the language model 138” (Paragraph 28) and “the prompt construction unit 124 has access to user context information, such as information indicative of how the user has utilized templates in the past and which templates the user utilized, the type of content that the user typically creates, and/or such information that may be included in the prompt to further refine the predictions that a particular template subsection would be relevant to the user” (Paragraph 31), “generating, by the cloud-based collaboration system, from each natural language prompt a metadata template using a generative Artificial Intelligence (AI)” as “Systems and methods for providing real-time artificial intelligence (AI) powered selection of template sections for adaptive content creation are described herein” (Paragraph 18), “The request processing unit 122 receives a request for a template suggestion from the native application 114 or the web application 190. The template suggestion may include information identifying one or more templates and one or more template subsections from one or multiple templates. The request includes an indication identifying the application for which the template is being requested and textual content from an electronic content item for which the template is being requested. The textual content may include a title and/or headers extracted from the electronic content item, a sentence or sentences of text from the electronic content item, a paragraph or paragraphs from the electronic content item, a section or sections of the electronic content item, and/or other portion of the text from the electronic content item. 
In some implementations, other information, such as a position of a cursor or other indication of the portion of the electronic content item being worked on is included in the request received by the request processing unit 122 and is provided to the prompt construction unit 124 to provide the language model 138 with information indicative of the portion of the electronic content item on which the user is currently working. This information can be used, in part, by the language model 138 to determine an appropriate subsection of a template to be recommended to the user. In a non-limiting example, a first subsection of a template may be relevant, if the user is working on a content of a first type within the electronic content item while a second subsection of a template may be more relevant, if the user is working on a content of a second type within the electronic content item. The prompt construction unit 124 may also include additional contextual information about the user in some implementations. The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the language model 138” (Paragraph 28), “a template is dynamically constructed to assist the user in creating an electronic content item, the template subsections may be selected from multiple templates from the template datastore 136 that have been determined to be relevant” (Paragraph 29), “The generation of the template suggestions is an iterative process that continues as the user creates or modifies the electronic content items. This approach enables the system to dynamically construct a custom template from among the available templates and template subsections” (Paragraph 30), and “The template processing unit 134 can also include additional information associated with the templates in the template datastore 138. One example of such information is metadata associated with the source template files. The metadata may provide useful information about the contents of the templates and/or the subsections of the templates included therein. The template categorization unit 402 can also use the metadata when categorizing the template to determine an appropriate category for the template. The template segmentation unit 406 may also utilize such metadata to determine the types of section headers or other similar elements that are typically expected to be found in such source templates” (Paragraph 58), and “refining, by the cloud-based collaboration system, each generated metadata template” as “The generation of the template suggestions is an iterative process that continues as the user creates or modifies the electronic content items. This approach enables the system to dynamically construct a custom template from among the available templates and template subsections” (Paragraph 30) and “The application services platform 110 may also make copies of the template that were used to create the new template and store those in the template datastore 136 so that the new template is not inadvertently modified if the source templates for one or more of the template subsections is modified” (Paragraph 50). The examiner further notes that the secondary reference of Fernandez teaches the concept of generating a prompt based off of various aspects of document data (including specific content of such a document (which teaches the claimed document type in the broadest reasonable interpretation)). 
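To make the examiner's mapping easier to follow, here is a minimal sketch of the classify-then-prompt flow the rejection reads onto the combination of Puram (document type classification) and Fernandez (prompt construction). All names are hypothetical and the code is an illustration of the asserted combination, not code from either reference:

# Illustrative sketch only: hypothetical names, not code from Puram or Fernandez.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def classify_document(doc: Document) -> tuple[str, str]:
    """Stand-in for Puram's classification models: map a new document
    to a (document type, document source) pair."""
    text = doc.text.lower()
    if "invoice" in text:
        return ("invoice", "supplier")
    if "purchase order" in text:
        return ("purchase_order", "vendor")
    return ("unknown", "unknown")

def build_prompt(doc_type: str, intent: str, constraints: list[str]) -> str:
    """Stand-in for Fernandez's prompt construction unit: assemble a
    standardized natural language prompt from the document type, the
    intent, and constraint text drawn from reference documents."""
    constraint_text = "\n".join(f"- {c}" for c in constraints)
    return (
        f"Create a metadata template for documents of type '{doc_type}'.\n"
        f"Intent: {intent}\n"
        f"Observe these practices and procedures:\n{constraint_text}"
    )

doc = Document("d1", "Invoice No. 1234, bill to ACME Corp.")
doc_type, source = classify_document(doc)
print(build_prompt(
    doc_type,
    intent=f"extract billing fields from {source} documents",
    constraints=["Field names must be lowercase.", "Include an invoice date field."],
))

In this sketch the constraints list stands in for the claimed reference documents; under the rejection's reading, Gupta's policies would supply its contents.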
Such a generated prompt is also standardized and/or reformatted (which entails the use of the undefined claimed reference documents in the broadest reasonable interpretation). Moreover, although Puram teaches the generation of templates according to different document types, there is no explicit teaching of generating metadata templates via the use of AI based on a generated prompt. Nevertheless, Fernandez teaches the generation of templates with metadata (i.e. the claimed undefined metadata templates in the broadest reasonable interpretation) based off of prompts via the use of AI that can be modified (i.e. refined). The combination would result in the generation of metadata templates for the document types of Puram. It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Fernandez’s would have allowed Puram’s to provide a method for improving the suggestion and adaption of templates, as noted by Fernandez (Paragraph 1). Puram and Fernandez do not explicitly teach: D) one or more reference documents defining practices and procedure defining constraints on the natural language prompt. Gupta, however, teaches “one or more reference documents defining practices and procedure defining constraints on the natural language prompt” as “The belief augmentation component 155 may receive the moderated content category data 230 and the one or more protected class indications 235. The belief augmentation component 155 may determine one or more policies 305 corresponding to the moderated content category data 230 and the one or more protected class indications 235. A policy 305 may be a set of guidelines for the corresponding moderated content category data 230 and the protected class indications 235. For example, a policy 305 corresponding to “Violence” or “Harmful Acts” may include guidelines that causing harm to others is wrong and people should be treated with kindness. The policy 305 may be text data or other natural language representation data, or other types of data. The belief augmentation component 155 may request policy data from a database, such as policy storage 325. For example, the moderated content category data 230 (e.g., “Hate and Intolerance”) may correspond to a policy 305a for “Hate and Intolerance” and protected class indication 235 (e.g., “[group of people]”) may correspond to a policy 305b for “[group of people]”. The policies 305 may be a set of guidelines to mitigate bias and to avoid an moderated content output (e.g., biased output, harmful output, illegal content, etc.) by the language model 165” (Paragraph 66) and “The prompt generation component 160 may receive, from the belief augmentation component 155, the one or more policies 305 and the policy template 315, as well as the user input data 205. The prompt generation component 160 may populate the parameters of the policy template 315 based on the received data (e.g., policies 305, terms from the user input data 205, user profile information, etc.). The prompt generation component 160 may append the populated policy template 315 (either before or after) to the user input data 205 to generate the augmented prompt data 320. In some embodiments the augmented prompt data 320 may be text (or other type of data) representing natural language input. 
In some embodiments, the augmented prompt data 320 may be text formatted based on the prompt configuration requirements of the language model 165” (Paragraph 72). The examiner further notes that although Fernandez standardizes and reformats prompts to generate an augmented prompt, there is no explicit teaching of the use of practices & procedure as a basis to generate such augmented prompts. Nevertheless, Gupta teaches the use of applied policies (that include guidelines (which entails the use of practices and procedure)) defining constraints on prompts to generate augmented prompts. The combination would result in the use of such practices and procedure when generating the augmented prompts of Fernandez. It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Gupta’s would have allowed Puram’s and Fernandez’s to provide a method for achieving statistical parity in the outputs from an LLM, as noted by Gupta (Paragraph 66). 11. Claims 2, 9, and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Puram et al. (U.S. PGPUB 2024/0020328), in view of Fernandez et al. (U.S. PGPUB 2025/0045516), and further in view of Gupta et al. (U.S. PGPUB 2024/0428783) as applied to claims 1, 8, and 15 above, and further in view of Gadiya et al. (U.S. PGPUB 2022/0156456). 12. Regarding claims 2, 9, and 16, Puram, Fernandez, and Gupta do not explicitly teach a method, system, and non-transitory, computer readable medium comprising: A) prior to generating the natural language prompt for each determined document type, determining, by the cloud-based collaboration system, whether a predefined template is available for one or more of the determined document types; and B) in response to determining a predefined template is available for at least one of the determined document types, presenting, by the cloud-based collaboration system, the predefined template to a user through a user interface. Gadiya, however, teaches “prior to generating the natural language prompt for each determined document type, determining, by the cloud-based collaboration system, whether a predefined template is available for one or more of the determined document types” as “At 1006, CMS 100 (e.g., via content management module 308) determines, based on data in the user account, that the first document includes first content that matches second content of a second document, the second document subsequently modified to include a second set of overlaid fillable fields. At 1008, CMS 1000 (e.g., via content management module 308) determines that a first set of field types corresponding to the first set of overlaid fillable fields matches a second set of field types corresponding to the second set of overlaid fillable fields. At 1010, CMS 100, responsive to determining that the first set of field types matches the second set of field types, generates for display (e.g., using user interface module 302) a prompt to generate a template for the first document. In some embodiments, the CMS may transmit the prompt to client device 120. Client device 120 may use client application 200 to generate for display the prompt using, for example, display 210. In some embodiments, when a new document is received by the CMS or when a user selects a document for generating a fillable form, the CMS may determine that the same or similar document was previously processed, and a template has been generated for the document. 
Based on the determination, the CMS may recommend to the user to use the existing template. For example, the CMS may determine that the document was previously processed by using any methods described above. That is, as discussed above, the CMS may hash the documents to determine that the documents have the same content, input the documents into a neural network, compare the textual data of the documents, or determine that two documents are the same using another method. Thus, it should be emphasized that in some embodiments the CMS may determine that the documents are the same (“matching”) even when they are not identical, that is, even when their hashes don't match. For example, the CMS may determine that two documents corresponding to two different photos or scans of the same physical document are matching even though the photos or scans may not be identical” (Paragraphs 96-97) and “in response to determining a predefined template is available for at least one of the determined document types, presenting, by the cloud-based collaboration system, the predefined template to a user through a user interface” as “At 1006, CMS 100 (e.g., via content management module 308) determines, based on data in the user account, that the first document includes first content that matches second content of a second document, the second document subsequently modified to include a second set of overlaid fillable fields. At 1008, CMS 1000 (e.g., via content management module 308) determines that a first set of field types corresponding to the first set of overlaid fillable fields matches a second set of field types corresponding to the second set of overlaid fillable fields. At 1010, CMS 100, responsive to determining that the first set of field types matches the second set of field types, generates for display (e.g., using user interface module 302) a prompt to generate a template for the first document. In some embodiments, the CMS may transmit the prompt to client device 120. Client device 120 may use client application 200 to generate for display the prompt using, for example, display 210. In some embodiments, when a new document is received by the CMS or when a user selects a document for generating a fillable form, the CMS may determine that the same or similar document was previously processed, and a template has been generated for the document. Based on the determination, the CMS may recommend to the user to use the existing template. For example, the CMS may determine that the document was previously processed by using any methods described above. That is, as discussed above, the CMS may hash the documents to determine that the documents have the same content, input the documents into a neural network, compare the textual data of the documents, or determine that two documents are the same using another method. Thus, it should be emphasized that in some embodiments the CMS may determine that the documents are the same (“matching”) even when they are not identical, that is, even when their hashes don't match. For example, the CMS may determine that two documents corresponding to two different photos or scans of the same physical document are matching even though the photos or scans may not be identical” (Paragraphs 96-97). The examiner further notes that the secondary reference of Gadiya teaches the concept of displaying pre-existing templates before generating a prompt to a user. The combination would result in displaying preexisting templates in Puram and Fernandez before generating a prompt. 
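A minimal sketch of the exact-match path of Gadiya's template-reuse check (hashing normalized content and looking up a previously generated template) might look like the following; names are hypothetical, and Gadiya's fuzzier matching via neural networks or textual comparison is not shown:

# Illustrative sketch only: hypothetical names, not code from Gadiya.
import hashlib

template_index: dict[str, str] = {}  # normalized-content hash -> template id

def content_hash(text: str) -> str:
    # Normalize whitespace and case so trivially different copies still match.
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def find_existing_template(text: str) -> str | None:
    """Return a previously generated template id for a matching document,
    or None, signalling that a new template (and prompt) is needed."""
    return template_index.get(content_hash(text))

def register_template(text: str, template_id: str) -> None:
    template_index[content_hash(text)] = template_id

register_template("Invoice No. 1234 bill to ACME", "tmpl-invoice-1")
assert find_existing_template("invoice  no. 1234 BILL TO acme") == "tmpl-invoice-1"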
It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Gadiya’s would have allowed Puram’s, Fernandez’s, and Gupta’s to provide a method using existing templates for uploaded documents, as noted by Gadiya (Paragraph 97). 13. Claims 4, 11, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Puram et al. (U.S. PGPUB 2024/0020328), in view of Fernandez et al. (U.S. PGPUB 2025/0045516), and further in view of Gupta et al. (U.S. PGPUB 2024/0428783) as applied to claims 1, 8, and 15 above, and further in view of Kang et al. (U.S. PGPUB 2021/0365640). 14. Regarding claims 4, 11, and 18, Puram further teaches a method, system, and non-transitory, computer readable medium comprising: A) wherein pre-processing the set of one or more documents further comprises: applying an AI analysis to the set of one or more documents (Paragraph 20); B) identifying the one or more document types in the set of one or more documents based on the AI analysis (Paragraph 20). The examiner notes that Puram teaches “wherein pre-processing the set of one or more documents further comprises: applying an AI analysis to the set of one or more documents” as “Certain embodiments of the concepts, techniques, and structures disclosed herein are directed intelligent verification of documents. The verification of the documents is achieved by building and using multiple classification models to templatize the different types documents received from various sources, and verifying the documents based on data (or “target data”) extracted from the documents using the templates. The types of documents may include invoices, purchase orders, receipts, shipping orders, advance shipping notifications, export documents, and any other documents received by an organization. The sources (or “document sources”) may include suppliers, factories, original design manufacturers (ODMs), contract manufacturers, partners, vendors, financial institutions, government agencies, and other entity or organization that sends or otherwise provides a document to the organization. When a new document is received, the same classification models used in generating the templates can be used to determine a document type and a document source of the new document. Once the new document is classified into an appropriate document type and document source, a template which is appropriate for extracting target data from documents of the same document type and from the same document source as the new document can be determined (or “identified”) and used to extract the target data from the new document. The target data extracted from the new document can then be verified in order to verify the new document” (Paragraph 20). The examiner further notes that the use of a classification model (i.e. the claimed undefined general AI) to determine a document type of a newly uploaded document teaches the claimed applying. The examiner further notes that Puram teaches “identifying the one or more document types in the set of one or more documents based on the AI analysis” as “Certain embodiments of the concepts, techniques, and structures disclosed herein are directed intelligent verification of documents. 
The verification of the documents is achieved by building and using multiple classification models to templatize the different types documents received from various sources, and verifying the documents based on data (or “target data”) extracted from the documents using the templates. The types of documents may include invoices, purchase orders, receipts, shipping orders, advance shipping notifications, export documents, and any other documents received by an organization. The sources (or “document sources”) may include suppliers, factories, original design manufacturers (ODMs), contract manufacturers, partners, vendors, financial institutions, government agencies, and other entity or organization that sends or otherwise provides a document to the organization. When a new document is received, the same classification models used in generating the templates can be used to determine a document type and a document source of the new document. Once the new document is classified into an appropriate document type and document source, a template which is appropriate for extracting target data from documents of the same document type and from the same document source as the new document can be determined (or “identified”) and used to extract the target data from the new document. The target data extracted from the new document can then be verified in order to verify the new document” (Paragraph 20). The examiner further notes that the determination of a document type of newly uploaded documents via a classification model (i.e. the claimed undefined general AI) teaches the claimed identifying. Puram, Fernandez, and Gupta do not explicitly teach: C) presenting, through a user interface to a user, each identified document type in the set of one or more documents along with a request for the intent for each determined document type; and D) receiving, through the user interface from the user, the intent for each determined document type. Kang, however, teaches “presenting, through a user interface to a user, each identified document type in the set of one or more documents along with a request for the intent for each determined document type” as “a document type list 351 of a document type feedback region 350 may display a document type that a document classification model may generate as prediction results. To be specific, each document type ‘interrogative sentence’ and ‘declarative sentence’ displayed in the document type list 351 may be a document type trained through pre-training for the document classification model. In addition, the number displayed to the right of each document type in the document type list 351 indicates the number of documents classified into each document type among all documents included in a document set and through this, the user may recognize the frequency of each document type in the document set” (Paragraph 61) and “an intention list 361 of an intention analysis feedback region 360 may display intention items that the intention analysis model can generate as prediction results. To be specific, each intention item “health”, “fire”, “vehicle”, and “contact” displayed in the intention list 361 may be an intention item trained through pre-training for the intention analysis model. 
In addition, the number displayed to the right of each intention item in the intention list 361 indicates the number of documents classified as the intention item among all documents included in a document set, through this, the user can see the frequency of each intention item in the document set” (Paragraph 65), and “the user may select one of other intention items indicated by an empty circle in the intention list 361 to change the intention of a document subjected to analysis. For example, the user may select ‘fire’ from the intention list 361 to change the intention of ‘document 2’ to ‘fire’ from ‘health’” (Paragraph 67) and “receiving, through the user interface from the user, the intent for each determined document type” as “a document type list 351 of a document type feedback region 350 may display a document type that a document classification model may generate as prediction results. To be specific, each document type ‘interrogative sentence’ and ‘declarative sentence’ displayed in the document type list 351 may be a document type trained through pre-training for the document classification model. In addition, the number displayed to the right of each document type in the document type list 351 indicates the number of documents classified into each document type among all documents included in a document set and through this, the user may recognize the frequency of each document type in the document set” (Paragraph 61) and “an intention list 361 of an intention analysis feedback region 360 may display intention items that the intention analysis model can generate as prediction results. To be specific, each intention item “health”, “fire”, “vehicle”, and “contact” displayed in the intention list 361 may be an intention item trained through pre-training for the intention analysis model. In addition, the number displayed to the right of each intention item in the intention list 361 indicates the number of documents classified as the intention item among all documents included in a document set, through this, the user can see the frequency of each intention item in the document set” (Paragraph 65), and “the user may select one of other intention items indicated by an empty circle in the intention list 361 to change the intention of a document subjected to analysis. For example, the user may select ‘fire’ from the intention list 361 to change the intention of ‘document 2’ to ‘fire’ from ‘health’” (Paragraph 67). The examiner further notes that the secondary reference of Kang teaches the concept of displaying determined document types and a “request” for intent clarification from a user via the interface element 360. The combination would result in the user of Puram being able to view document types and clarify intent for such document types. It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Kang’s would have allowed Puram’s, Fernandez’s, and Gupta’s to provide a method for changing intentions via a user input, as noted by Kang (Paragraph 68). 15. Claims 5 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Puram et al. (U.S. PGPUB 2024/0020328), in view of Fernandez et al. (U.S. PGPUB 2025/0045516), and further in view of Gupta et al. (U.S. PGPUB 2024/0428783) as applied to claims 1, 8, and 15 above, and further in view of Bouguerra et al. (U.S. PGPUB 2024/0354491). 16. 
Regarding claims 5 and 12, Puram does not explicitly teach a method and system comprising: A) wherein generating the natural language prompt for each determined document type comprises: reading the one or more reference documents defining constraints on the natural language prompt for each document type; B) applying a Large Language Model (LLM) AI to the intent for each determined document type, and the one or more reference documents defining constraints on the natural language prompt for each document type. Fernandez, however, teaches “wherein generating the natural language prompt for each determined document type comprises: reading the one or more reference documents defining constraints on the natural language prompt for each document type” as “The request processing unit 122 receives a request for a template suggestion from the native application 114 or the web application 190. The template suggestion may include information identifying one or more templates and one or more template subsections from one or multiple templates. The request includes an indication identifying the application for which the template is being requested and textual content from an electronic content item for which the template is being requested. The textual content may include a title and/or headers extracted from the electronic content item, a sentence or sentences of text from the electronic content item, a paragraph or paragraphs from the electronic content item, a section or sections of the electronic content item, and/or other portion of the text from the electronic content item. In some implementations, other information, such as a position of a cursor or other indication of the portion of the electronic content item being worked on is included in the request received by the request processing unit 122 and is provided to the prompt construction unit 124 to provide the language model 138 with information indicative of the portion of the electronic content item on which the user is currently working. This information can be used, in part, by the language model 138 to determine an appropriate subsection of a template to be recommended to the user. In a non-limiting example, a first subsection of a template may be relevant, if the user is working on a content of a first type within the electronic content item while a second subsection of a template may be more relevant, if the user is working on a content of a second type within the electronic content item. The prompt construction unit 124 may also include additional contextual information about the user in some implementations. 
The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the language model 138” (Paragraph 28) and “the prompt construction unit 124 has access to user context information, such as information indicative of how the user has utilized templates in the past and which templates the user utilized, the type of content that the user typically creates, and/or such information that may be included in the prompt to further refine the predictions that a particular template subsection would be relevant to the user” (Paragraph 31) and “applying a Large Language Model (LLM) AI to the intent for each determined document type, and the one or more reference documents defining constraints on the natural language prompt for each document type” as “The request processing unit 122 receives a request for a template suggestion from the native application 114 or the web application 190. The template suggestion may include information identifying one or more templates and one or more template subsections from one or multiple templates. The request includes an indication identifying the application for which the template is being requested and textual content from an electronic content item for which the template is being requested. The textual content may include a title and/or headers extracted from the electronic content item, a sentence or sentences of text from the electronic content item, a paragraph or paragraphs from the electronic content item, a section or sections of the electronic content item, and/or other portion of the text from the electronic content item. In some implementations, other information, such as a position of a cursor or other indication of the portion of the electronic content item being worked on is included in the request received by the request processing unit 122 and is provided to the prompt construction unit 124 to provide the language model 138 with information indicative of the portion of the electronic content item on which the user is currently working. This information can be used, in part, by the language model 138 to determine an appropriate subsection of a template to be recommended to the user. In a non-limiting example, a first subsection of a template may be relevant, if the user is working on a content of a first type within the electronic content item while a second subsection of a template may be more relevant, if the user is working on a content of a second type within the electronic content item. The prompt construction unit 124 may also include additional contextual information about the user in some implementations. The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the language model 138” (Paragraph 28), “the prompt construction unit 124 has access to user context information, such as information indicative of how the user has utilized templates in the past and which templates the user utilized, the type of content that the user typically creates, and/or such information that may be included in the prompt to further refine the predictions that a particular template subsection would be relevant to the user” (Paragraph 31), and “The prompt construction unit 124 provides the prompt to the request processing unit 122 for processing. The request processing unit 122 then provides the prompt to the language model 138 as an input. 
The language model 138 analyzes the prompt and outputs a template suggestion” (Paragraph 32). The examiner further notes that the secondary reference of Fernandez teaches the concept of generating a prompt based off of various aspects of document data (including specific content of such a document (which teaches the claimed document type in the broadest reasonable interpretation)). Such a generated prompt is also standardized and/or reformatted (which entails the use of the undefined claimed reference documents in the broadest reasonable interpretation) and subsequently fed into an LLM (i.e. applying an LLM to a generated prompt). The combination would result in using an LLM to process the discerned document types of Puram. It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Fernandez’s would have allowed Puram’s to provide a method for improving the suggestion and adaption of templates, as noted by Fernandez (Paragraph 1). Puram and Fernandez do not explicitly teach: B) applying a Large Language Model (LLM) AI to generate the natural language prompt; and C) presenting the generated natural language prompt to a user through a user interface. Bouguerra, however, teaches “applying a Large Language Model (LLM) AI to generate the natural language prompt” as “the structured module 308 determines, utilizing an LLM, an intent of a user from the user input message in the structured prompt window 710, and displays the LLM generated prompt message in the structured prompt window 710 such as (e.g., Would you like a notification when your package arrives?)” (Paragraph 68) and “presenting the generated natural language prompt to a user through a user interface” as “the structured module 308 determines, utilizing an LLM, an intent of a user from the user input message in the structured prompt window 710, and displays the LLM generated prompt message in the structured prompt window 710 such as (e.g., Would you like a notification when your package arrives?)” (Paragraph 68). The examiner further notes that although Fernandez and Gupta generate prompts, there is no explicit teaching of an LLM generating such a prompt (rather a prompt construction unit generates the prompt in Fernandez and prompt generation component generates the prompt in Gupta). Nevertheless, the secondary reference of Bouguerra teaches the concept of an LLM generating a prompt that is displayed to a user. The combination would result in the prompt construction unit of Fernandez and the prompt generation unit of Gupta to be part of an LLM. It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Bouguerra’s would have allowed Puram’s, Fernandez’s, and Gupta’s to provide a method for quickly and efficiently understand document text , as noted by Bouguerra (Paragraph 3). 17. Claims 6, 13, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Puram et al. (U.S. PGPUB 2024/0020328), in view of Fernandez et al. (U.S. PGPUB 2025/0045516), and further in view of Gupta et al. (U.S. PGPUB 2024/0428783) as applied to claims 1, 8, and 15 above, and further in view of Bouguerra et al. (U.S. PGPUB 2024/0354491) as applied to claims 5 and 12 above, and further in view of Cappetta (U.S. PGPUB 2025/0094132). 18. 
Regarding claims 6 and 13, Puram, Fernandez, Gupta, and Bouguerra do not explicitly teach a method and system comprising: A) wherein generating the natural language prompt for each determined document type further comprises: determining whether to modify the generated natural language prompt; and B) in response to determining to modify the generated natural language prompt, receiving input indicating a modification to the generated natural language prompt and updating the generated natural language prompt based on the received input. Cappetta, however, teaches “wherein generating the natural language prompt for each determined document type further comprises: determining whether to modify the generated natural language prompt” as “during execution of process 200, feedback may be provided to the user via graphical user interface 150 or an audio interface between the user and server application 112. Thus, the user may monitor each subprocess in process 200, for example, to review the prompts that are generated, the outputs of the generative AI model, and/or the like. The user may also be provided with one or more inputs to pause or otherwise disrupt process 200, modify the generated prompts, modify the outputs of the generative AI model, and/or the like” (Paragraph 84) and “in response to determining to modify the generated natural language prompt, receiving input indicating a modification to the generated natural language prompt and updating the generated natural language prompt based on the received input” as “during execution of process 200, feedback may be provided to the user via graphical user interface 150 or an audio interface between the user and server application 112. Thus, the user may monitor each subprocess in process 200, for example, to review the prompts that are generated, the outputs of the generative AI model, and/or the like. The user may also be provided with one or more inputs to pause or otherwise disrupt process 200, modify the generated prompts, modify the outputs of the generative AI model, and/or the like” (Paragraph 84). The examiner further notes that the secondary reference of Cappetta teaches the concept of a user modifying (which entails a user determining to modify a generated prompt) a generated prompt (which results in that prompt being updated). The combination would result in allowing a user to modify the generated prompts of Fernandez, Gupta, and Bouguerra. It would have been obvious to one of ordinary skill in the art before the effective filing date of instant invention to combine the teachings of the cited references because teaching Cappetta’s would have allowed Puram’s, Fernandez’s, Gupta’s, and Bouguerra’s to provide a method for a user to review generated prompts, as noted by Cappetta (Paragraph 84). Regarding claim 19, Puram does not explicitly teach a non-transitory, computer readable medium comprising: A) wherein generating the natural language prompt for each determined document type comprises: reading the one or more reference documents defining constraints on the natural language prompt for each document type; B) applying a Large Language Model (LLM) AI to the intent for each determined document type, and the one or more reference documents defining constraints on the natural language prompt for each document type. 
Regarding claim 19, Puram does not explicitly teach a non-transitory, computer readable medium comprising: A) wherein generating the natural language prompt for each determined document type comprises: reading the one or more reference documents defining constraints on the natural language prompt for each document type; B) applying a Large Language Model (LLM) AI to the intent for each determined document type, and the one or more reference documents defining constraints on the natural language prompt for each document type.

Fernandez, however, teaches limitation A as "The request processing unit 122 receives a request for a template suggestion from the native application 114 or the web application 190. The template suggestion may include information identifying one or more templates and one or more template subsections from one or multiple templates. The request includes an indication identifying the application for which the template is being requested and textual content from an electronic content item for which the template is being requested. The textual content may include a title and/or headers extracted from the electronic content item, a sentence or sentences of text from the electronic content item, a paragraph or paragraphs from the electronic content item, a section or sections of the electronic content item, and/or other portion of the text from the electronic content item. In some implementations, other information, such as a position of a cursor or other indication of the portion of the electronic content item being worked on is included in the request received by the request processing unit 122 and is provided to the prompt construction unit 124 to provide the language model 138 with information indicative of the portion of the electronic content item on which the user is currently working. This information can be used, in part, by the language model 138 to determine an appropriate subsection of a template to be recommended to the user. In a non-limiting example, a first subsection of a template may be relevant, if the user is working on a content of a first type within the electronic content item while a second subsection of a template may be more relevant, if the user is working on a content of a second type within the electronic content item. The prompt construction unit 124 may also include additional contextual information about the user in some implementations. The prompt construction unit 124 may reformat or otherwise standardize the information to be included in the prompt to a standardized format that is recognized by the language model 138" (Paragraph 28) and "the prompt construction unit 124 has access to user context information, such as information indicative of how the user has utilized templates in the past and which templates the user utilized, the type of content that the user typically creates, and/or such information that may be included in the prompt to further refine the predictions that a particular template subsection would be relevant to the user" (Paragraph 31). Fernandez teaches limitation B by the same disclosures (Paragraphs 28 and 31), and further as "The prompt construction unit 124 provides the prompt to the request processing unit 122 for processing. The request processing unit 122 then provides the prompt to the language model 138 as an input. The language model 138 analyzes the prompt and outputs a template suggestion" (Paragraph 32).

The examiner further notes that Fernandez teaches the concept of generating a prompt based on various aspects of document data, including the specific content of the document (which teaches the claimed document type under the broadest reasonable interpretation). Such a generated prompt is also standardized and/or reformatted (which entails the use of the otherwise undefined claimed reference documents under the broadest reasonable interpretation) and subsequently fed into an LLM (i.e., applying an LLM to a generated prompt). The combination would result in using an LLM to process the discerned document types of Puram. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Fernandez's teachings would have allowed Puram's system to provide a method for improving the suggestion and adaptation of templates, as noted by Fernandez (Paragraph 1).
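To picture the kind of pipeline this mapping concerns, the following is a minimal sketch of constructing a standardized prompt from document context and constraint documents and then applying a language model to it. It is an illustration only: the class, function, and field names are hypothetical and appear in neither Fernandez nor the claims.

```python
# Hypothetical sketch: standardize document context and constraints into a
# prompt, then apply a language model to it. Illustrative only.
from dataclasses import dataclass

@dataclass
class DocumentContext:
    doc_type: str   # e.g., "lease_agreement"
    intent: str     # inferred purpose for this document type
    excerpt: str    # title/headers/text extracted from the content item

def build_prompt(ctx: DocumentContext, constraint_docs: list[str]) -> str:
    """Reformat context and constraints into a single standardized prompt."""
    constraints = "\n".join(f"- {c}" for c in constraint_docs)
    return (
        f"Document type: {ctx.doc_type}\n"
        f"Intent: {ctx.intent}\n"
        f"Context: {ctx.excerpt}\n"
        f"Observe these constraints:\n{constraints}\n"
        "Suggest a metadata template for this document type."
    )

def suggest_template(ctx: DocumentContext, constraint_docs: list[str], llm) -> str:
    # `llm` is any callable str -> str, e.g., a thin wrapper around a model API
    return llm(build_prompt(ctx, constraint_docs))
```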
Puram, Fernandez, and Gupta do not explicitly teach: B) applying a Large Language Model (LLM) AI to generate the natural language prompt; and C) presenting the generated natural language prompt to a user through a user interface.

Bouguerra, however, teaches limitation B as "the structured module 308 determines, utilizing an LLM, an intent of a user from the user input message in the structured prompt window 710, and displays the LLM generated prompt message in the structured prompt window 710 such as (e.g., Would you like a notification when your package arrives?)" (Paragraph 68), and teaches limitation C by the same disclosure (Paragraph 68). The examiner further notes that although Fernandez and Gupta generate prompts, neither explicitly teaches an LLM generating such a prompt; rather, a prompt construction unit generates the prompt in Fernandez and a prompt generation component generates the prompt in Gupta. Nevertheless, Bouguerra teaches the concept of an LLM generating a prompt that is displayed to a user. The combination would result in the prompt construction unit of Fernandez and the prompt generation component of Gupta being part of an LLM. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Bouguerra's teachings would have allowed the systems of Puram and Fernandez to provide a method for quickly and efficiently understanding document text, as noted by Bouguerra (Paragraph 3).

Puram, Fernandez, Gupta, and Bouguerra do not explicitly teach: D) determining whether to modify the generated natural language prompt; and E) in response to determining to modify the generated natural language prompt, receiving input indicating a modification to the generated natural language prompt and updating the generated natural language prompt based on the received input. Cappetta, however, teaches limitations D and E by the same disclosure discussed above for claims 6 and 13 (Paragraph 84): a user may review the generated prompts and modify them, which entails determining to modify a generated prompt and results in that prompt being updated. As with claims 6 and 13, the combination would result in allowing a user to modify the generated prompts of Fernandez, Gupta, and Bouguerra, and it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Cappetta's teachings would have allowed the combined system to provide a method for a user to review generated prompts, as noted by Cappetta (Paragraph 84).
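The distinction drawn above, a prompt authored by the LLM itself rather than assembled by a fixed construction unit, can be pictured with a short hypothetical sketch; the `llm` callable is a stand-in for any model API, and nothing here is code from Bouguerra or the other references.

```python
# Hypothetical: the LLM itself authors the user-facing prompt, which is shown.
def generate_prompt_with_llm(user_intent: str, llm) -> str:
    """Ask the model to write the prompt, rather than filling a fixed template."""
    meta_prompt = (
        "Write one short clarifying question to show a user whose inferred "
        f"intent is: {user_intent!r}"
    )
    return llm(meta_prompt)

def present(prompt_text: str) -> None:
    print(prompt_text)  # stand-in for a structured prompt window in a UI

# Usage: present(generate_prompt_with_llm("track package delivery", some_llm))
```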
19. Claims 7, 14, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Puram et al. (U.S. PGPUB 2024/0020328), in view of Fernandez et al. (U.S. PGPUB 2025/0045516), and further in view of Gupta et al. (U.S. PGPUB 2024/0428783) as applied to claims 1, 8, and 15 above, and further in view of Turner (U.S. PGPUB 2025/0238605).

20. Regarding claims 7, 14, and 20, Puram, Fernandez, and Gupta do not explicitly teach a method, system, and non-transitory, computer readable medium comprising: A) wherein refining each generated metadata template comprises: presenting the generated metadata template through a user interface; B) receiving, through the user interface, an input indicating a user feedback related to the generated metadata template; C) determining, based on the received input, whether to change the generated metadata template; D) in response to determining to change the generated metadata template, receiving, through the user interface, an input indicating a change to the generated metadata template, updating the generated metadata template based on the input; and E) training the generative AI based on the input.

Turner, however, teaches limitations A through D as "the current subject matter may be configured to receive feedback from at least one user computing device. The feedback may be to the generated templates, associated portions of documents, and/or documents that have been generated using templates retrieved from template database" (Paragraph 34), "if a request to generate an electronic document is received, the current subject matter may be configured to use the object model to generate the document. It may select between the templates (e.g., initial template, updated template, etc.) that may be included in the object model and use the selected template to generate the electronic document. Moreover, a user may, via a computing device, provide feedback to refine the document generation. Using the feedback, the current subject matter may, for instance, update at least one of the templates to generate at least one of the other templates" (Paragraph 44), and "The feedback 220 may be any type of feedback, such as, for example, a yes/no vote (e.g., thumbs up, thumbs down, etc.) that may be indicative of the user's acceptance of and/or satisfaction with template/associated portions. The feedback 220 may be textual feedback that may include specific comments that may be written and sent to the document structure and clause extraction engine 150 by the user using the user device 218. As can be understood, any other type of feedback may be provided" (Paragraph 108). Turner teaches limitation E, training the generative AI based on the input, as "Once feedback is received, the current subject matter may be configured to update one or more templates stored in the template database. All templates may be updated. Alternatively, or in addition, only templates for a specific type of electronic document may be updated. Moreover, the feedback may be used to identify one or more machine learning (ML) models for generating templates for specific type(s) of electronic documents (e.g., one ML model may be used to generate templates for lease agreements, another ML model may be used to generate templates for non-disclosure agreements, etc.). In some embodiments, the feedback may be used to update an ML model that was used to generate a previous version of a template" (Paragraph 34) and "the ML models may include at least one of the following: a large language model, at least another generative AI model, and any combination thereof. The generative AI models may be part of the current subject matter system and/or be one or more third party models (e.g., ChatGPT, Bard, DALL-E, Midjourney, DeepMind, etc.)" (Paragraph 35).

The examiner further notes that Turner teaches the concept of receiving user feedback (via a user interface) to modify generated templates, and that such feedback is also used to train ML models (which can be generative AI models). The combination would result in using user feedback to modify the generated templates of Puram and the generated metadata templates of Fernandez. It would have been obvious to one of ordinary skill in the art before the effective filing date of the instant invention to combine the teachings of the cited references because Turner's teachings would have allowed the systems of Puram, Fernandez, and Gupta to provide a method for reducing errors and system resource usage, as noted by Turner (Paragraph 1).
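The refinement loop recited in limitations A through E can likewise be pictured with a brief hypothetical sketch; the template structure and function names are illustrative assumptions, not code from Turner or the claims.

```python
# Hypothetical sketch of feedback-driven metadata template refinement.
from dataclasses import dataclass, field

@dataclass
class MetadataTemplate:
    doc_type: str
    fields: dict[str, str]                       # field name -> field type
    feedback_log: list[str] = field(default_factory=list)

def refine_template(template: MetadataTemplate, feedback: str,
                    change: dict[str, str] | None) -> MetadataTemplate:
    """Record user feedback; apply a user-supplied change if one was given."""
    template.feedback_log.append(feedback)       # retained, e.g., for training
    if change:                                   # user decided to change the template
        template.fields.update(change)
    return template

tmpl = MetadataTemplate("lease_agreement", {"tenant": "string", "rent": "number"})
tmpl = refine_template(tmpl, "thumbs_down: missing term dates",
                       {"lease_start": "date", "lease_end": "date"})
```

A real system would persist the feedback log and periodically fine-tune or reselect the generating model on it; the sketch shows only the shape of the loop.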
Allowable Subject Matter

21. Claims 3 and 17 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Claim 10 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims. Specifically, although the prior art clearly teaches the determination of document types in a collaborative cloud-based system (see Puram), Fernandez uses predefined templates as a basis to generate new templates, Gadiya displays predefined templates to a user before generating any prompts, and Box teaches metadata templates (including the refining of preexisting templates), the detailed negative claim language of performing all of the generating of the natural language prompt for each determined document type, generating from each natural language prompt a metadata template associated with the document type for which the natural language prompt was generated, and refining the generated templates in response to determining not to use any of the at least one predefined template via the defined cloud-based collaboration system is not found in the prior art, in conjunction with the rest of the limitations of the parent claims.

Response to Arguments

22. Applicant's arguments with respect to claims 1-20 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument (see the newly applied reference of Gupta).

Conclusion

23. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. PGPUB 2020/0065313, issued to Patel et al. on 27 February 2020. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to use metadata templates).
U.S. PGPUB 2017/0093867, issued to Burns et al. on 30 March 2017. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to use metadata templates).
U.S. PGPUB 2025/0117406, issued to Jalagam et al. on 10 April 2025.
The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to use metadata templates).
Web Archive article entitled "Metadata Templates," by Box, available at https://web.archive.org/web/20230602180159/https://developer.box.com/guides/metadata/templates/, captured 02 June 2023. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g., methods to use metadata templates).

24. Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

25. Any inquiry concerning this communication or earlier communications from the examiner should be directed to Mahesh Dwivedi, whose telephone number is (571) 272-2731. The examiner can normally be reached Monday to Friday, 8:20 am to 4:40 pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at (571) 272-4085. The fax number for the organization where this application or proceeding is assigned is (571) 273-8300. Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

Mahesh Dwivedi
Primary Examiner
Art Unit 2168
February 13, 2026

/MAHESH H DWIVEDI/
Primary Examiner, Art Unit 2168

Prosecution Timeline

Jul 15, 2024
Application Filed
Oct 28, 2025
Non-Final Rejection — §103, §112
Jan 29, 2026
Response Filed
Feb 13, 2026
Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591818
FORECASTING AND MITIGATING CONCEPT DRIFT USING NATURAL LANGUAGE PROCESSING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12585690
COMPUTER-READABLE RECORDING MEDIUM STORING INFORMATION VERIFICATION PROGRAM, INFORMATION PROCESSING APPARATUS, AND INFORMATION PROCESSING SYSTEM
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12561366
Real-Time Micro-Profile Generation Using a Dynamic Tree Structure
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12561469
INFERRING SCHEMA STRUCTURE OF FLAT FILE
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554730
HYBRID DATABASE IMPLEMENTATIONS
Granted Feb 17, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
69%
Grant Probability
74%
With Interview (+4.3%)
3y 6m
Median Time to Grant
Moderate
PTA Risk
Based on 751 resolved cases by this examiner. Grant probability derived from career allow rate.
