Prosecution Insights
Last updated: April 19, 2026
Application No. 18/626,009

SYSTEMS AND METHODS FOR GENERATING DYNAMIC DOCUMENT TEMPLATES USING OPTICAL CHARACTER RECOGNITION AND CLUSTERING TECHNIQUES

Non-Final OA: rejections under §101, §102, §103, §112
Filed: Apr 03, 2024
Examiner: REPSHER III, JOHN T
Art Unit: 2143
Tech Center: 2100 (Computer Architecture & Software)
Assignee: State Farm Mutual Automobile Insurance Company
OA Round: 1 (Non-Final)
Grant Probability: 58% (Moderate)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 58% of resolved cases (203 granted / 347 resolved; +3.5% vs TC avg)
Interview Lift: +48.0% (strong; allowance rate among resolved cases with vs. without an interview)
Typical Timeline: 3y 5m avg prosecution; 18 applications currently pending
Career History: 365 total applications across all art units

Statute-Specific Performance

§101: 8.9% (-31.1% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§103: 49.6% (+9.6% vs TC avg)
§112: 20.6% (-19.4% vs TC avg)

Tech Center averages are estimates. Based on career data from 347 resolved cases.

Office Action

Rejections under §101, §102, §103, and §112
DETAILED ACTION

This action is in response to the original filing on 04/03/2024. Claims 1-28 are pending and have been considered below.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 5, 8, 19, and 22 are objected to because of the following informalities:

Claims 5 and 19 recite ‘the same’; however, they should recite - - a same - -.
Claims 8 and 22 recite ‘the document’; however, they should recite - - a document - -.
Claim 19 recites ‘a first text elements’; however, it should recite - - a first text element - -.

Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-28 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Regarding claim 1, claim 1 recites “analyze the text values for each text element of the plurality of text elements identified within each document to the text values for each text element of the plurality of text elements identified in other documents to determine a set of static text elements between at least a portion of the plurality of documents”. The claim lacks antecedent basis for “the text values for each text element” and “the text values for each text element of the plurality of text elements identified in other documents”. It is unclear how these limitations relate to previously recited text value limitations. It is further unclear how “the plurality of text elements identified within each document” is intended to relate to “a plurality of text elements located within each document”. It is further unclear whether “each document” is intended to refer to the variety of different documents, the batch of documents, or the plurality of documents of different document types. For the purposes of examination, this limitation is interpreted as:

analyze text values for each text element of a first plurality of text elements identified within each document of a first plurality of documents to text values for each text element of a second plurality of text elements identified in other documents to determine a set of static text elements between at least a portion of the plurality of documents

Claim 1 further recites “the at least a portion of the documents included within the batch of documents having matching sets of static text elements”. It is unclear how this limitation is intended to relate to the previously recited “a portion of the plurality of documents”. For the purposes of examination, this limitation is interpreted as:

at least a first portion of first documents included within the batch of documents having matching sets of static text elements

Regarding claim 15, claim 15 contains substantially similar limitations to those found in claim 1. Consequently, claim 15 is rejected for the same reasons.

Regarding claims 2-14 and 16-28, claims 2-14 and 16-28 are also rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for depending on an indefinite parent claim.
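Editor's note: as interpreted above, the "static text element" determination amounts to finding text values common across the documents of a batch. A minimal Python sketch of that interpretation follows; all names are hypothetical, and this illustrates the examiner's reading of the limitation, not the applicant's actual implementation:

```python
from typing import Dict, List, Set

def static_text_elements(documents: Dict[str, List[str]]) -> Set[str]:
    """Return the text values shared by every document in the batch.

    `documents` maps a document id to the text values extracted
    from that document (e.g., by OCR).
    """
    value_sets = [set(values) for values in documents.values()]
    if not value_sets:
        return set()
    static = value_sets[0]
    for values in value_sets[1:]:
        static &= values  # keep only values present in every document
    return static

batch = {
    "doc_a": ["Invoice #", "Total", "Acme Corp", "2024-01-05"],
    "doc_b": ["Invoice #", "Total", "Beta LLC", "2024-02-11"],
}
print(sorted(static_text_elements(batch)))  # → ['Invoice #', 'Total']
```

A template would then be generated from the intersection for the portion of documents sharing it; the claim as filed leaves open which "portion" of documents participates in the intersection, which is the ambiguity the rejection identifies.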
Regarding claim 7, claim 7 recites “identify static text values by comparing text values of documents to each other and applying a text element count to the most frequently repeated text values in the corresponding documents”. It is unclear to which previous limitations “each other” is intended to refer. The claim lacks antecedent basis for “the most frequently”. It is unclear to which previous limitation “the corresponding documents” is intended to refer. For the purposes of examination, this limitation is interpreted as:

identify static text values by comparing first text values of documents to second text values and applying a text element count to a set of most frequently repeated text values in a first plurality of corresponding documents

Regarding claim 21, claim 21 contains substantially similar limitations to those found in claim 7. Consequently, claim 21 is rejected for the same reasons.

Regarding claim 9, claim 9 recites “compare each document to each other document”. It is unclear whether “each document” and “each other document” are intended to refer to the variety of different documents, the batch of documents, or the plurality of documents of different document types. For the purposes of examination, this limitation is interpreted as:

compare each document of a first plurality of documents to each other document of a second plurality of documents

Regarding claim 23, claim 23 contains substantially similar limitations to those found in claim 9. Consequently, claim 23 is rejected for the same reasons.

Regarding claim 11, claim 11 recites “analyze the text value for each text element of the plurality of text elements identified within the document in comparison to one or more stored templates including a template for the first type, wherein each template is based upon a matching set of static text elements”. The claim lacks antecedent basis for “the text value for each text element”. It is further unclear how “the plurality of text elements identified within the document” is intended to relate to “a plurality of text elements located within the document”. It is further unclear to which previously recited template in claims 1 and 11 “each template” is intended to refer. For the purposes of examination, this limitation is interpreted as:

analyze a first text value for each text element of a first plurality of text elements identified within the document in comparison to one or more stored templates including a template for the first type, wherein each template of a first plurality of templates is based upon a matching set of static text elements

Regarding claim 25, claim 25 contains substantially similar limitations to those found in claim 11. Consequently, claim 25 is rejected for the same reasons.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-28 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Claims 1 and 15

Step 1: Claims 1 and 15 recite a system and a method; therefore, they are directed to the statutory categories of a machine and a method.
Step 2A Prong 1: The claims recite, inter alia:

identify a plurality of text elements located within each document of the batch of documents, wherein each text element includes a text value; analyze the text values for each text element of the plurality of text elements identified within each document to the text values for each text element of the plurality of text elements identified in other documents to determine a set of static text elements between at least a portion of the plurality of documents; and generate a template that represents the at least a portion of the documents included within the batch of documents having matching sets of static text elements

Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of analyzing texts in documents and generating a template based on matching content, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper.

Step 2A Prong 2: This judicial exception is not integrated into a practical application. The additional elements of “A template generation system for categorizing a variety of different documents, the template generation system comprising: at least one memory with instructions stored thereon; and at least one processor in communication with the at least one memory, wherein the instructions, when executed by the at least one processor, cause the at least one processor to”, and “A computer-implemented method of generating a template, the method implemented by a template generation server comprising a memory and a processor, the method comprising” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). The claimed computer components are recited at a high level of generality and are merely invoked as a tool to perform the abstract idea. The additional element of “receive a batch of documents including a plurality of documents of different document types” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)). Even when viewed in combination, these additional elements do not integrate the abstract idea into a practical application, and the claims are thus directed to the abstract idea.

Step 2B: The claims do not contain significantly more than the judicial exception. “A template generation system for categorizing a variety of different documents, the template generation system comprising: at least one memory with instructions stored thereon; and at least one processor in communication with the at least one memory, wherein the instructions, when executed by the at least one processor, cause the at least one processor to”, and “A computer-implemented method of generating a template, the method implemented by a template generation server comprising a memory and a processor, the method comprising” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The additional element of “receive a batch of documents including a plurality of documents of different document types” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is a well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”). Nothing in the claims provides significantly more than the abstract idea. As such, the claims are ineligible.

Claims 2-14 and 16-28

Step 1: Claims 2-14 and 16-28 recite systems and methods; therefore, they are directed to the statutory categories of machines and methods.
Step 2: Claims 2-14 and 16-28 merely narrow the previously recited abstract idea limitations. For the reasons described above with respect to claims 1 and 15, this judicial exception is not meaningfully integrated into a practical application, nor do the claims recite significantly more than the abstract idea. The claims disclose similar limitations to those described for the independent claims above and do not provide anything more than the mental processes that are practically capable of being performed in the human mind with the assistance of pen and paper and mathematical concepts that are achievable through mathematical computation.

Claims 2 and 16 further recite the additional element of “analyze the set of static text elements based upon one or more criterion to determine whether or not to generate the template”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of analyzing text to determine whether to generate a template, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 3 and 17 further recite the additional element of “generate a set of static text elements for each comparison of two or more documents”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of generating text based on a comparison, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 4 and 18 further recite the additional element of “perform optical character recognition on each of the plurality of documents”. This element amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)), and is a well-understood, routine, conventional activity (see MPEP § 2106.05(d); “Receiving or transmitting data over a network”). The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 5 and 19 further recite the additional element of “determine whether a first text element in a first document includes the same text value as a second text element in a second document”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining matching text, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 6 and 20 further recite the additional element of “determine whether a first text element in a first document includes a matching text value as a second text element in a second document”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining matching text, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 7 and 21 further recite the additional element of “identify static text values by comparing text values of documents to each other and applying a text element count to the most frequently repeated text values in the corresponding documents”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of comparing text and counting repeated values, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, and a mathematical concept that is achievable through mathematical computation. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 8 and 22 further recite the additional element of “determine a set of static text elements between at least a portion of the plurality of documents without identifying a location of any text element within the document”.
Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining text within documents, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 9 and 23 further recite the additional element of “compare each document to each other document to determine a percentage match”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of comparing text and determining a matching percentage, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, and a mathematical concept that is achievable through mathematical computation. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 10 and 24 further recite the additional element of “wherein when the percentage match between two or more documents exceeds a threshold, the at least one processor is further programmed to determine a set of static text elements between the two or more documents”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining text within documents based on a percentage, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper, and a mathematical concept that is achievable through mathematical computation. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 11 and 25 further recite the additional element of “identify a plurality of text elements located within the document; analyze the text value for each text element of the plurality of text elements identified within the document in comparison to one or more stored templates including a template for the first type, wherein each template is based upon a matching set of static text elements; and categorize the document as the first type based upon matching a template of the first type”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of categorizing a document based on comparing text to a template, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). The additional element of “receive a document of a first type” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)).
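Editor's note: the percentage-match-with-threshold flow recited in claims 9, 10, 23, and 24 can be sketched as follows. The Jaccard-style percentage and the 50% threshold are editorial assumptions for illustration; neither the application nor the Office Action specifies the formula:

```python
from itertools import combinations
from typing import Dict, Iterator, List, Set, Tuple

def percentage_match(a: Set[str], b: Set[str]) -> float:
    """Percentage of text values shared between two documents."""
    union = a | b
    if not union:
        return 0.0
    return 100.0 * len(a & b) / len(union)

def matched_static_sets(
    docs: Dict[str, List[str]], threshold: float = 50.0
) -> Iterator[Tuple[Tuple[str, str], Set[str]]]:
    """Compare each document to each other document; for pairs whose
    percentage match exceeds the threshold, yield the pair and their
    shared (static) text values."""
    for (name_a, vals_a), (name_b, vals_b) in combinations(docs.items(), 2):
        a, b = set(vals_a), set(vals_b)
        if percentage_match(a, b) > threshold:
            yield (name_a, name_b), a & b

docs = {
    "doc_1": ["Invoice #", "Total", "Tax", "Acme"],
    "doc_2": ["Invoice #", "Total", "Tax", "Beta"],
    "doc_3": ["Patient", "DOB", "Provider"],
}
for pair, shared in matched_static_sets(docs, threshold=50.0):
    print(pair, sorted(shared))
```

Here doc_1 and doc_2 share 3 of 5 distinct values (60% > 50%), so a static set is determined for that pair, while doc_3 matches neither.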
Claims 12 and 26 further recite the additional element of “if no match is found”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining that no match is found, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)). The additional element of “cache the document” amounts to insignificant extra-solution activity in the form of mere data gathering and output (see MPEP § 2106.05(g)).

Claims 13 and 27 further recite the additional element of “generate a document type for each generated template; and assign the document type to each corresponding document of the set of static text elements”. Under its broadest reasonable interpretation in light of the specification, this limitation encompasses the mental process of determining a document type for a document, which is an evaluation or observation that is practically capable of being performed in the human mind with the assistance of pen and paper. The additional elements of “wherein the at least one processor is further programmed” and “The computer-implemented method” amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claims 14 and 28 further recite the additional elements of “wherein the at least one processor is further programmed to store the template and the set of static text elements within a database” and “The computer-implemented method”.
These elements amount to no more than generally linking the use of a judicial exception to a particular technological environment or field of use (see MPEP § 2106.05(h)).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-8, 11, 13-22, 25, 27, and 28 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sampson et al. (US 8595235 B1, published 11/26/2013), hereinafter Sampson.

Regarding claim 1, Sampson teaches the claim comprising:

A template generation system for categorizing a variety of different documents, the template generation system comprising: at least one memory with instructions stored thereon; and at least one processor in communication with the at least one memory, wherein the instructions, when executed by the at least one processor, cause the at least one processor to (Sampson Figs. 1-17; col. 3 [line 21], A computer-implemented or computer-executable version of the invention may be embodied using, stored on, or associated with computer-readable medium or non-transitory computer-readable medium. A computer-readable medium may include any medium that participates in providing instructions to one or more processors for execution; col. 3 [line 45], FIG. 3 shows a system block diagram of computer system 201. As in FIG. 2, computer system 201 includes monitor 203, keyboard 209, and mass storage devices 217. Computer system 201 further includes subsystems such as central processor 302, system memory 304; col. 4 [line 50], the system receives as input to the training module a set of documents 425 that may be used to train the system. The training module outputs a set of document classes 430 and a set of document templates 435. Each document template is associated with a document class);

receive a batch of documents including a plurality of documents of different document types; identify a plurality of text elements located within each document of the batch of documents, wherein each text element includes a text value (Sampson Figs. 1-17; col. 6 [line 49], FIG. 9 shows a simplified flow 905 for creating one or more classes based on a set of documents. The documents may be referred to as training documents; col. 6 [line 62], In a step 910, the system receives or gets a set of documents. The documents may be received from a scanner or other device capable of providing a digital image, digitized representation, or digital representation of physical document papers. That is, the documents may be digitized documents, scanned documents, or digital representations of physical documents. Some specific examples of documents include invoices, tax forms, applications (e.g., benefit enrollment), insurance claims, purchase orders, checks, financial documents, mortgage documents, health care records (e.g., patient records), legal documents, and so forth; col. 7 [line 14], In a step 915, for each document, the system generates a list of words.
A list of words includes one or more words from the document);

analyze the text values for each text element of the plurality of text elements identified within each document to the text values for each text element of the plurality of text elements identified in other documents to determine a set of static text elements between at least a portion of the plurality of documents; and generate a template that represents the at least a portion of the documents included within the batch of documents having matching sets of static text elements (Sampson Figs. 1-17; col. 4 [line 50], the system receives as input to the training module a set of documents 425 that may be used to train the system. The training module outputs a set of document classes 430 and a set of document templates 435. Each document template is associated with a document class; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents. In a specific implementation, if the comparison indicates the first and second documents are similar, a document class and associated template are created for classifying documents similar to the first and second documents. If the comparison indicates the first and second documents are different, a first document class and associated first template is created for classifying documents similar to the first document. And, a second document class and associated second template is created for classifying documents similar to the second document; col. 5 [line 29], Applicants have recognized that structured and semi-structured documents may have certain patterns that are text-based such as "Total," "Invoice #," and so forth that appear in the same relative position in each document of the same class. This application discusses techniques for learning these common text patterns and their relative locations and applying this on production document images to provide improved grouping and classification methods; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 43], a document template associated with a document class includes a set of keywords and location information indicating a location of a keyword in the template relative to one or more other keywords in the template. Upon creating the set of document classes based on grouping the set of training documents, the system can create a document template to be associated with each of the document classes. In other words, once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 22 [line 66], (e.g., the words "total," "tax," and "subtotal" may appear in the same positions consistently and if two of the three are found one may be fairly sure to have found the right place)).

Regarding claim 15, claim 15 contains substantially similar limitations to those found in claim 1. Consequently, claim 15 is rejected for the same reasons.
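Editor's note: Sampson's training flow as quoted (per-document word lists, a pairwise distance function, and a clustering step that groups similar documents) loosely resembles the following sketch. The Jaccard distance and the greedy single-pass grouping are editorial assumptions chosen for brevity, not the reference's actual algorithm:

```python
from typing import Dict, List, Set, Tuple

def distance(words_a: Set[str], words_b: Set[str]) -> float:
    """Jaccard distance between two word sets: 0.0 = identical, 1.0 = disjoint."""
    union = words_a | words_b
    if not union:
        return 0.0
    return 1.0 - len(words_a & words_b) / len(union)

def cluster(docs: Dict[str, List[str]], max_distance: float = 0.5) -> List[List[str]]:
    """Greedy single-pass grouping: assign each document to the first
    cluster whose representative word set is within max_distance,
    otherwise start a new cluster (and class) for it."""
    clusters: List[Tuple[Set[str], List[str]]] = []
    for name, words in docs.items():
        word_set = set(words)
        for rep, members in clusters:
            if distance(rep, word_set) <= max_distance:
                members.append(name)
                break
        else:
            clusters.append((word_set, [name]))
    return [members for _, members in clusters]

training_docs = {
    "inv_1": ["invoice", "total", "tax", "subtotal"],
    "inv_2": ["invoice", "total", "tax", "date"],
    "form_1": ["patient", "dob", "provider"],
}
print(cluster(training_docs))  # → [['inv_1', 'inv_2'], ['form_1']]
```

In Sampson, each resulting group would then yield a class and a template built from words appearing in all (or most) of its documents, together with relative location information, which the sketch omits.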
Regarding claim 2, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to analyze the set of static text elements based upon one or more criterion to determine whether or not to generate the template (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents. In a specific implementation, if the comparison indicates the first and second documents are similar, a document class and associated template are created for classifying documents similar to the first and second documents. If the comparison indicates the first and second documents are different, a first document class and associated first template is created for classifying documents similar to the first document. And, a second document class and associated second template is created for classifying documents similar to the second document; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; see also col. 4 [line 50], col. 20 [line 43]).

Regarding claim 16, claim 16 contains substantially similar limitations to those found in claim 2. Consequently, claim 16 is rejected for the same reasons.
Regarding claim 3, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to generate a set of static text elements for each comparison of two or more documents (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents. In a specific implementation, if the comparison indicates the first and second documents are similar, a document class and associated template are created for classifying documents similar to the first and second documents. If the comparison indicates the first and second documents are different, a first document class and associated first template is created for classifying documents similar to the first document. And, a second document class and associated second template is created for classifying documents similar to the second document; col. 5 [line 29], Applicants have recognized that structured and semi-structured documents may have certain patterns that are text-based such as "Total," "Invoice #," and so forth that appear in the same relative position in each document of the same class. This application discusses techniques for learning these common text patterns and their relative locations and applying this on production document images to provide improved grouping and classification methods; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 43], a document template associated with a document class includes a set of keywords and location information indicating a location of a keyword in the template relative to one or more other keywords in the template. Upon creating the set of document classes based on grouping the set of training documents, the system can create a document template to be associated with each of the document classes. In other words, once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 22 [line 66], (e.g., the words "total," "tax," and "subtotal" may appear in the same positions consistently and if two of the three are found one may be fairly sure to have found the right place; see also col. 4 [line 50])

Regarding claim 17, claim 17 contains substantially similar limitations to those found in claim 3. Consequently, claim 17 is rejected for the same reasons.

Regarding claim 4, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to perform optical character recognition on each of the plurality of documents (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents; col. 5 [line 44], A specific application of the system is to capture data from scanned images including structured, semi-structured documents, or both such as invoices and forms. Classification is the process of deciding whether an object belongs in a particular class (from a set of classes).
In order to classify, the system can provide a set of templates defining each object class. A training step takes a set of images and from these creates a set of classes. The images may be images of documents. That is, physical documents that have been scanned via a scanner and output as optical character recognition (OCR) data, i.e., scanned or digitized documents; col. 6 [line 49], FIG. 9 shows a simplified flow 905 for creating one or more classes based on a set of documents. The documents may be referred to as training documents; col. 6 [line 62], In a step 910, the system receives or gets a set of documents; col. 7 [line 8], the received document data includes optical character recognition (OCR) data such as a set of characters with position information, confidence information, or both. The received document data may include a set of OCR data sets, each data set being associated with a document, and including a list of characters or words; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents; col. 20 [line 43], Upon creating the set of document classes based on grouping the set of training documents, the system can create a document template to be associated with each of the document classes; see also col. 4 [line 50])

Regarding claim 18, claim 18 contains substantially similar limitations to those found in claim 4. Consequently, claim 18 is rejected for the same reasons.

Regarding claim 5, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to determine whether a first text element in a first document includes the same text value as a second text element in a second document (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document); col. 5 [line 29], Applicants have recognized that structured and semi-structured documents may have certain patterns that are text-based such as "Total," "Invoice #," and so forth that appear in the same relative position in each document of the same class. This application discusses techniques for learning these common text patterns and their relative locations and applying this on production document images to provide improved grouping and classification methods; col. 7 [line 20], in some places on forms and invoices where a number might appear, the number is likely to vary (e.g., a "Total: $123.00" and "Total: $999.99" or "Nov. 24, 2011" versus "Oct. 19, 2012"). Thus, in a specific implementation, a pretreatment technique includes altering the digits to a predefined value such as 0 to allow the system to consider different numerical values between two documents to be the "same"; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 8 [line 21], These words--"Dresden," "INVOICE," "DATE," and "TOTAL"--all appear in the same place on examples of the invoices; col. 20 [line 43], a document template associated with a document class includes a set of keywords and location information indicating a location of a keyword in the template relative to one or more other keywords in the template. Upon creating the set of document classes based on grouping the set of training documents, the system can create a document template to be associated with each of the document classes. In other words, once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 22 [line 66], (e.g., the words "total," "tax," and "subtotal" may appear in the same positions consistently and if two of the three are found one may be fairly sure to have found the right place; see also col. 4 [line 50])

Regarding claim 19, claim 19 contains substantially similar limitations to those found in claim 5. Consequently, claim 19 is rejected for the same reasons.

Regarding claim 6, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to determine whether a first text element in a first document includes a matching text value as a second text element in a second document (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document); col. 5 [line 29], Applicants have recognized that structured and semi-structured documents may have certain patterns that are text-based such as "Total," "Invoice #," and so forth that appear in the same relative position in each document of the same class.
This application discusses techniques for learning these common text patterns and their relative locations and applying this on production document images to provide improved grouping and classification methods; col. 7 [line 20], in some places on forms and invoices where a number might appear, the number is likely to vary (e.g., a "Total: $123.00" and "Total: $999.99" or "Nov. 24, 2011" versus "Oct. 19, 2012"). Thus, in a specific implementation, a pretreatment technique includes altering the digits to a predefined value such as 0 to allow the system to consider different numerical values between two documents to be the "same"; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 8 [line 21], These words--"Dresden," "INVOICE," "DATE," and "TOTAL"--all appear in the same place on examples of the invoices; col. 20 [line 43], a document template associated with a document class includes a set of keywords and location information indicating a location of a keyword in the template relative to one or more other keywords in the template. Upon creating the set of document classes based on grouping the set of training documents, the system can create a document template to be associated with each of the document classes. In other words, once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents.
The set of words may be referred to as the keywords of a template; col. 22 [line 66], (e.g., the words "total," "tax," and "subtotal" may appear in the same positions consistently and if two of the three are found one may be fairly sure to have found the right place; see also col. 4 [line 50])

Regarding claim 20, claim 20 contains substantially similar limitations to those found in claim 6. Consequently, claim 20 is rejected for the same reasons.

Regarding claim 7, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to identify static text values by comparing text values of documents to each other and applying a text element count to the most frequently repeated text values in the corresponding documents (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document); col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 43], once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 20 [line 23], FIG. 17 shows a flow 1705 for creating document templates; col. 20 [line 58], a keyword learning algorithm takes the collection of document images in a class and outputs a set of words in common. The algorithm starts by getting or obtaining the common set of words between each pair of documents; col. 20 [line 64], The system then creates a matrix of words in each document (e.g., docCount X words); col. 21 [line 11], This generates a list giving the information, for example, "word X appears in document A, B, C and D," "word Y appears in documents A and D," and so forth. The list may include a word, and a number of documents that the word has been found in, an identification of the documents that the word has been found in, or both; col. 21 [line 17], the system then sorts this list by another scoring function (which is a different scoring function from the distance function) that takes into account the number of documents a word is found in; col. 21 [line 22], A selection is made of the top N words that have a score at least equal to a threshold; col. 21 [line 37], The word text to be used is the word which occurs most often or most frequently; see also col. 4 [line 50], col. 5 [line 29], col. 8 [line 21], col. 7 [line 20], col. 22 [line 66])

Regarding claim 21, claim 21 contains substantially similar limitations to those found in claim 7. Consequently, claim 21 is rejected for the same reasons.

Regarding claim 8, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to determine a set of static text elements between at least a portion of the plurality of documents without identifying a location of any text element within the document (Sampson Figs. 1-17; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document); col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 43], once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 20 [line 23], FIG. 17 shows a flow 1705 for creating document templates; col. 20 [line 58], a keyword learning algorithm takes the collection of document images in a class and outputs a set of words in common. The algorithm starts by getting or obtaining the common set of words between each pair of documents; col. 20 [line 64], The system then creates a matrix of words in each document (e.g., docCount X words); col. 21 [line 11], This generates a list giving the information, for example, "word X appears in document A, B, C and D," "word Y appears in documents A and D," and so forth. The list may include a word, and a number of documents that the word has been found in, an identification of the documents that the word has been found in, or both; col. 21 [line 17], the system then sorts this list by another scoring function (which is a different scoring function from the distance function) that takes into account the number of documents a word is found in; col. 21 [line 22], A selection is made of the top N words that have a score at least equal to a threshold; col. 21 [line 37], The word text to be used is the word which occurs most often or most frequently; see also col. 4 [line 50], col. 5 [line 29], col. 8 [line 21], col. 7 [line 20], col. 22 [line 66])

Regarding claim 22, claim 22 contains substantially similar limitations to those found in claim 8. Consequently, claim 22 is rejected for the same reasons.

Regarding claim 11, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to: receive a document of a first type; identify a plurality of text elements located within the document; analyze the text value for each text element of the plurality of text elements identified within the document in comparison to one or more stored templates including a template for the first type, wherein each template is based upon a matching set of static text elements; and categorize the document as the first type based upon matching a template of the first type (Sampson Figs. 1-17; col. 5 [line 21], During a classification step, the location comparison engine can also be used to classify a document into a particular document class using the templates. For example, the location comparison engine can be used to compare the document to be classified against the document templates. Based on the comparison between the document and a document template, the document may be classified into a document class associated with the document template; col. 5 [line 44], The classification step then compares an image with each of the classes and decides in which class or classes the image belongs. In a specific implementation, if the image belongs to only one class the image may be considered classified; col. 5 [line 62], there is an automated training step, a classification step, or both which use a comparison function (which may be referred to as a distance function) to determine whether an image is "close" to another image or template.
The training and classification algorithms may use this comparison function; col. 6 [line 62], There is a "classification" function that compares an image and a reference set of keywords; col. 20 [line 43], a document template associated with a document class includes a set of keywords; col. 21 [line 42], a document template is created that includes the keywords. Upon receipt of a document to be classified, the document is compared against the template and is classified in response to the comparison (see FIG. 17 and accompanying discussion above); col. 21 [line 47], a template includes a set of keywords and first location information that indicates a location of a keyword in a template relative to one or more other keywords in the template. The system receives a document to be classified; col. 22 [line 19], To classify a document using a template, the system generates a set of word pairs. Each word pair includes a keyword from the set of keywords of the selected template and a corresponding word from the document to be classified--see step 1115 (FIG. 11) and accompanying description of generating word pairs; col. 21 [line 11], This generates a list giving the information, for example, "word X appears in document A, B, C and D," "word Y appears in documents A and D," and so forth. The list may include a word, and a number of documents that the word has been found in, an identification of the documents that the word has been found in, or both; col. 22 [line 31], determine whether or not the received document should be classified in the document class associated with the template. Classifying the document in the document class may include tagging the document with a tag or other metadata information that indicates the document class; see also col. 4 [line 50], col. 5 [line 29], col. 8 [line 21], col. 7 [line 20], col. 22 [line 66])

Regarding claim 25, claim 25 contains substantially similar limitations to those found in claim 11.
Consequently, claim 25 is rejected for the same reasons.

Regarding claim 13, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to: generate a document type for each generated template; and assign the document type to each corresponding document of the set of static text elements (Sampson Figs. 1-17; col. 4 [line 50], the system receives as input to the training module a set of documents 425 that may be used to train the system. The training module outputs a set of document classes 430 and a set of document templates 435. Each document template is associated with a document class; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents. In a specific implementation, if the comparison indicates the first and second documents are similar, a document class and associated template are created for classifying documents similar to the first and second documents. If the comparison indicates the first and second documents are different, a first document class and associated first template is created for classifying documents similar to the first document. And, a second document class and associated second template is created for classifying documents similar to the second document; col. 5 [line 21], During a classification step, the location comparison engine can also be used to classify a document into a particular document class using the templates. For example, the location comparison engine can be used to compare the document to be classified against the document templates. Based on the comparison between the document and a document template, the document may be classified into a document class associated with the document template; col. 5 [line 44], The classification step then compares an image with each of the classes and decides in which class or classes the image belongs. In a specific implementation, if the image belongs to only one class the image may be considered classified; col. 5 [line 62], there is an automated training step, a classification step, or both which use a comparison function (which may be referred to as a distance function) to determine whether an image is "close" to another image or template. The training and classification algorithms may use this comparison function; col. 6 [line 62], There is a "classification" function that compares an image and a reference set of keywords; col. 20 [line 43], a document template associated with a document class includes a set of keywords; col. 21 [line 42], a document template is created that includes the keywords. Upon receipt of a document to be classified, the document is compared against the template and is classified in response to the comparison (see FIG. 17 and accompanying discussion above); col. 21 [line 47], a template includes a set of keywords and first location information that indicates a location of a keyword in a template relative to one or more other keywords in the template. The system receives a document to be classified; col. 22 [line 19], To classify a document using a template, the system generates a set of word pairs. Each word pair includes a keyword from the set of keywords of the selected template and a corresponding word from the document to be classified--see step 1115 (FIG. 11) and accompanying description of generating word pairs; col. 21 [line 11], This generates a list giving the information, for example, "word X appears in document A, B, C and D," "word Y appears in documents A and D," and so forth. The list may include a word, and a number of documents that the word has been found in, an identification of the documents that the word has been found in, or both; col. 22 [line 31], determine whether or not the received document should be classified in the document class associated with the template. Classifying the document in the document class may include tagging the document with a tag or other metadata information that indicates the document class; see also col. 4 [line 50], col. 5 [line 29], col. 8 [line 21], col. 7 [line 20], col. 22 [line 66])

Regarding claim 27, claim 27 contains substantially similar limitations to those found in claim 13. Consequently, claim 27 is rejected for the same reasons.

Regarding claim 14, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to store the template and the set of static text elements within a database (Sampson Figs. 1-17; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 23], The templates may be stored in a template database; col. 20 [line 43], a document template associated with a document class includes a set of keywords and location information indicating a location of a keyword in the template relative to one or more other keywords in the template. Upon creating the set of document classes based on grouping the set of training documents, the system can create a document template to be associated with each of the document classes.
In other words, once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; see also col. 4 [line 50], col. 4 [line 65])

Regarding claim 28, claim 28 contains substantially similar limitations to those found in claim 14. Consequently, claim 28 is rejected for the same reasons.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 9, 10, 12, 23, 24, and 26 are rejected under 35 U.S.C. 103 as being unpatentable over Sampson in view of Downs (US 20210256216 A1, published 08/19/2021), hereinafter Downs.

Regarding claim 9, Sampson teaches all the limitations of claim 1, further comprising: wherein the at least one processor is further programmed to compare each document to each other document to determine a similarity match (Sampson Figs. 1-17; col. 4 [line 50], the system receives as input to the training module a set of documents 425 that may be used to train the system. The training module outputs a set of document classes 430 and a set of document templates 435. Each document template is associated with a document class; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents. In a specific implementation, if the comparison indicates the first and second documents are similar, a document class and associated template are created for classifying documents similar to the first and second documents. If the comparison indicates the first and second documents are different, a first document class and associated first template is created for classifying documents similar to the first document. And, a second document class and associated second template is created for classifying documents similar to the second document; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that makes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance." The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 43], once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 22 [line 66], (e.g., the words "total," "tax," and "subtotal" may appear in the same positions consistently and if two of the three are found one may be fairly sure to have found the right place)

However, Sampson fails to expressly disclose a percentage match. In the same field of endeavor, Downs teaches: a percentage match (Downs Figs. 1-17; [0048], identification engine 202 may identify common document content that is identical among documents included in the set of electronic document templates. In another example, identification engine 202 may identify first common document content that is common content among documents included in the set of electronic document templates by determining that content (e.g., text, an image, or a document structure) of a first document of the set of electronic document templates is not identical, but is within a pre-defined degree of similarity of content with respect to a second document of the set of electronic document templates. For example, textual content may be determined to be within a pre-defined degree of similarity if a predetermined percentage of the content (e.g., 90%, or 9 out of 10 words) is identical; [0050], engine 202 identifies first common document content that is shared content among documents included in the set of electronic document templates on the basis of a pre-defined degree of similarity, component template engine 206 may create the component template to include a user-selected version of the first common content; [0082], FIG. 13 is a flow diagram of implementation of a method for creating component templates, where the method includes identifying common document content based on a pre-defined degree of similarity between content instances)

It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated a percentage match as suggested in Downs into Sampson. Doing so would be desirable because this disclosure relates generally to management of content, and more particularly to systems, methods and products for identifying common content such as semantically similar content in electronic document templates and creating component templates based on the identified common content (see Downs [0001]).
The exponential growth of these templates as design users utilize CCM applications can pose significant challenges to an enterprise, however. The enterprise's documents to be sent to clients typically evolve over time due to changing needs, resulting in multiple versions of templates. When a new letter or other document is needed, a design user will often simply copy an existing template and modify it to suit the new needs. This can result in duplicate content which can become expensive to maintain due to the user resources required to maintain the swelling library of templates. Such a process can also pose security issues with respect to personal information, as client information may be inadvertently stored as part of old templates. Additionally, as some of the templates may be stored in non-human-readable formats, designer users may have additional difficulties in identifying and working with duplicate templates (see Downs [0026]). The disclosed examples provide for an efficient and easy to use method and system for creation of component templates. The disclosed examples thus enable easy identification of inefficiencies caused by duplicative document designs, intelligent creation of component templates, and storage of the created templates in template libraries accessible to the designer users during project development. Users of CCM applications and other applications should each appreciate the reduced costs, time savings, and increased document quality to be enjoyed with utilization of the disclosed examples relative to manual creation and maintenance of template libraries (see Downs [0034]).

Regarding claim 23, claim 23 contains substantially similar limitations to those found in claim 9. Consequently, claim 23 is rejected for the same reasons.
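The "pre-defined degree of similarity" cited from Downs above (a predetermined percentage of identical words, e.g., 90%, or 9 out of 10 words) can be sketched as a simple word-overlap check. This is an illustrative reconstruction, not code from either reference; the function name, sample text, and threshold are hypothetical, and it deliberately ignores word position, unlike Sampson's location comparison engine.

```python
def percent_word_match(text_a: str, text_b: str) -> float:
    """Fraction of words in text_a that also appear in text_b.

    A rough, order-insensitive overlap measure in the spirit of the
    "predetermined percentage of identical words" test quoted above.
    """
    words_a = text_a.lower().split()
    vocab_b = set(text_b.lower().split())
    if not words_a:
        return 0.0
    return sum(1 for w in words_a if w in vocab_b) / len(words_a)

# Two template sentences differing in 1 word out of 10 ("thirty" vs "sixty"):
a = "please remit the total amount due within thirty calendar days"
b = "please remit the total amount due within sixty calendar days"
score = percent_word_match(a, b)   # 9 of 10 words match -> 0.9
is_similar = score >= 0.9          # meets the 90% example threshold
```

A production system would likely normalize punctuation and compare positions as well; this sketch only captures the percentage-match idea the rejection relies on.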
Regarding claim 10, Sampson in view of Downs teaches all the limitations of claim 9, further comprising: wherein when the similarity match between two or more documents exceeds a threshold, the at least one processor is further programmed to determine a set of static text elements between the two or more documents (Sampson Figs. 1-17; col. 4 [line 50], the system receives as input to the training module a set of documents 425 that may be used to train the system. The training module outputs a set of document classes 430 and a set of document templates 435. Each document template is associated with a document class; col. 4 [line 65], during a training step the location comparison engine is used to compare a document (e.g., first document) in the set of documents with another document (e.g., second document) in the set of documents. In a specific implementation, if the comparison indicates the first and second documents are similar, a document class and associated template are created for classifying documents similar to the first and second documents. If the comparison indicates the first and second documents are different, a first document class and associated first template is created for classifying documents similar to the first document. And, a second document class and associated second template is created for classifying documents similar to the second document; col. 8 [line 1], In a step 920, the system runs a clustering algorithm that compares the documents (using the generated word lists) and groups similar documents. In a specific implementation, the clustering algorithm incorporates a similarity function (which may be referred to as a distance function) which is an algorithm that takes, among other things, a set of word pairs, each word pair including a word from a first document and a word from a second document. The system takes a pair of documents and returns a "distance."
The "distance" can indicate whether or not the pair of documents are similar; col. 20 [line 43], once there is a set of document images that are of the same class, system determines a set of words that appear in all (or at least most) of the documents. The set of words may be referred to as the keywords of a template; col. 22 [line 66], (e.g., the words "total," "tax," and "subtotal" may appear in the same positions consistently and if two of the three are found one may be fairly sure to have found the right place) Downs further teaches: the percentage match (Downs Figs. 1-17; [0048], identification engine 202 may identify common document content that is identical among documents included in the set of electronic document templates. In another example, identification engine 202 may identify first common document content that is common content among documents included in the set of electronic document templates by determining that content (e.g., text, an image, or a document structure) of a first document of the set of electronic document templates is not identical, but is within a pre-defined degree of similarity of content with respect to a second document of the set of electronic document templates. For example, textual content may be determined to be within a pre-defined degree of similarity if a predetermined percentage of the content (e.g., 90%, or 9 out of 10 words) is identical; [0050], engine 202 identifies first common document content that is shared content among documents included in the set of electronic document templates on the basis of a pre-defined degree of similarity, component template engine 206 may create the component template to include a user-selected version of the first common content; [0082] FIG. 
13 is a flow diagram of implementation of a method for creating component templates, where the method includes identifying common document content based on a pre-defined degree of similarity between content instances) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated the percentage match as suggested in Downs into Sampson. Doing so would be desirable because this disclosure relates generally to management of content, and more particularly to systems, methods and products for identifying common content such as semantically similar content in electronic document templates and creating component templates based on the identified common content (see Downs [0001]). The exponential growth of these templates as design users utilize CCM applications can pose significant challenges to an enterprise, however. The enterprise's documents to be sent to clients typically evolve over time due to changing needs, resulting in multiple versions of templates. When a new letter or other document is needed, a design user will often simply copy an existing template and modify it to suit the new needs. This can result in duplicate content which can become expensive to maintain due to the user resources required to maintain the swelling library of templates. Such a process can also pose security issues with respect to personal information, as client information may be inadvertently stored as part of old templates. Additionally, as some of the templates may be stored in non-human-readable formats, designer users may have additional difficulties in identifying and working with duplicate templates (see Downs [0026]). The disclosed examples provide for an efficient and easy to use method and system for creation of component templates.
The disclosed examples thus enable easy identification of inefficiencies caused by duplicative document designs, intelligent creation of component templates, and storage of the created templates in template libraries accessible to the designer users during project development. Users of CCM applications and other applications should each appreciate the reduced costs, time savings, and increased document quality to be enjoyed with utilization of the disclosed examples relative to manual creation and maintenance of template libraries (see Downs [0034]).

Regarding claim 24, claim 24 contains substantially similar limitations to those found in claim 10. Consequently, claim 24 is rejected for the same reasons.

Regarding claim 12, Sampson teaches all the limitations of claim 11, further comprising: wherein the at least one processor is further programmed to analyze the document if no match is found (Sampson Figs. 1-17; col. 4 [line 50], The set of document classes and templates are provided to the classification module. The classification module receives as input a document 440 to be classified. The classification module outputs a classification result 445. The classification result may specify the document class in which the document should be classified; col. 5 [line 21], During a classification step, the location comparison engine can also be used to classify a document into a particular document class using the templates; col. 5 [line 44], The classification step then compares an image with each of the classes and decides in which class or classes the image belongs. In a specific implementation, if the image belongs to only one class the image may be considered classified.
Otherwise, the image may either be over-classified or not classified at all; col. 5 [line 62], there is an automated training step, a classification step, or both which use a comparison function (which may be referred to as a distance function) to determine whether an image is "close" to another image or template. The training and classification algorithms may use this comparison function; col. 6 [line 62], There is a "classification" function that compares an image and a reference set of keywords; col. 8 [line 57], If there is no other class to compare, in a step 1045, the system creates a new class and classifies the selected document in the new class, the selected document now being a classified document; col. 20 [line 43], a document template associated with a document class includes a set of keywords; col. 21 [line 42], a document template is created that includes the keywords. Upon receipt of a document to be classified, the document is compared against the template and is classified in response to the comparison (see FIG. 17 and accompanying discussion above); col. 21 [line 47], a template includes a set of keywords and first location information that indicates a location of a keyword in a template relative to one or more other keywords in the template. The system receives a document to be classified; col. 21 [line 64], It should be appreciated that due to training errors, OCR errors, and other problems with the document image, there may not be 100 percent of the keywords of a template found in the received document; col. 22 [line 3], If there are a sufficient number of words found, the document should be able to be classified. The percentage of words found can be compared to a threshold value; boolean isClassified=(score>threshold); col. 21 [line 11], This generates a list giving the information, for example, "word X appears in document A, B, C and D," "word Y appears in documents A and D," and so forth.
The list may include a word, and a number of documents that the word has been found in, an identification of the documents that the word has been found in, or both; see also col. 4 [line 50], col. 5 [line 29], col. 8 [line 21], col. 7 [line 20], col. 22 [line 66]) However, Sampson fails to expressly disclose cache the document. In the same field of endeavor, Downs teaches: cache the document (Downs Figs. 1-17; [0048], identification engine 202 may identify common document content that is identical among documents included in the set of electronic document templates. In another example, identification engine 202 may identify first common document content that is common content among documents included in the set of electronic document templates by determining that content (e.g., text, an image, or a document structure) of a first document of the set of electronic document templates is not identical, but is within a pre-defined degree of similarity of content with respect to a second document of the set of electronic document templates. For example, textual content may be determined to be within a pre-defined degree of similarity if a predetermined percentage of the content (e.g., 90%, or 9 out of 10 words) is identical; [0054], Document analysis engine 214 represents generally a combination of hardware and programming to obtain a subject document and to analyze the subject document to identify a set of sections in the subject document (“subject document sections”). In examples, the analyzed subject document may be a new document, e.g., a document that has not been previously analyzed to identify sections or to create component templates. In a particular example, the analyzed subject document may be a hard copy document, with the analysis including an image capture of the hard copy document. 
Document analysis engine 214 is to in turn search the database of component templates to determine that the first component template stored in the database is a duplicate to a first subject document section. Document analysis engine 214 is to then create and store in the database subject component templates for the set of identified subject document sections, except that document analysis engine 214 does not create and store a subject component template for the first subject document section that is a duplicate to the first component template that was already included in the database) It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to have incorporated caching the document as suggested in Downs into Sampson. Doing so would be desirable because this disclosure relates generally to management of content, and more particularly to systems, methods and products for identifying common content such as semantically similar content in electronic document templates and creating component templates based on the identified common content (see Downs [0001]). The exponential growth of these templates as design users utilize CCM applications can pose significant challenges to an enterprise, however. The enterprise's documents to be sent to clients typically evolve over time due to changing needs, resulting in multiple versions of templates. When a new letter or other document is needed, a design user will often simply copy an existing template and modify it to suit the new needs. This can result in duplicate content which can become expensive to maintain due to the user resources required to maintain the swelling library of templates. Such a process can also pose security issues with respect to personal information, as client information may be inadvertently stored as part of old templates.
Additionally, as some of the templates may be stored in non-human-readable formats, designer users may have additional difficulties in identifying and working with duplicate templates (see Downs [0026]). The disclosed examples provide for an efficient and easy to use method and system for creation of component templates. The disclosed examples thus enable easy identification of inefficiencies caused by duplicative document designs, intelligent creation of component templates, and storage of the created templates in template libraries accessible to the designer users during project development. Users of CCM applications and other applications should each appreciate the reduced costs, time savings, and increased document quality to be enjoyed with utilization of the disclosed examples relative to manual creation and maintenance of template libraries (see Downs [0034]). Additionally, the caching system of Downs would improve the system of Sampson because, in this manner, document analysis engine 214 is to identify subject document content and is to create and to store component templates for such subject document content, while avoiding duplication of efforts and resources where it is determined content of the subject document is already represented by a component template stored in the database (see Downs [0050]).

Regarding claim 26, claim 26 contains substantially similar limitations to those found in claim 12. Consequently, claim 26 is rejected for the same reasons.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Singaraju (US 20240233427 A1) see Figs. 1-13 and [0054-0065], [0209]. Any inquiry concerning this communication or earlier communications from the examiner should be directed to JOHN T REPSHER III whose telephone number is (571)272-7487. The examiner can normally be reached Monday - Friday, 8AM-5PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Jennifer Welch, can be reached at (571) 272-7212. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /JOHN T REPSHER III/ Primary Examiner, Art Unit 2143
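The keyword-threshold classification quoted from Sampson in the action above (the percentage of template keywords found in a document is compared to a threshold value, boolean isClassified=(score>threshold), so that OCR and training errors need not defeat classification) can be sketched as follows. All names and the sample keyword list are illustrative, not taken from the record.

```python
def classify_against_template(document_words, template_keywords, threshold=0.75):
    """Score a document against one template's keyword set.

    score is the fraction of template keywords found in the document;
    the document is classified when score exceeds the threshold, so a
    few keywords lost to OCR errors do not prevent classification.
    """
    doc_vocab = {w.lower() for w in document_words}
    found = sum(1 for k in template_keywords if k.lower() in doc_vocab)
    score = found / len(template_keywords)
    return score, score > threshold

# Hypothetical invoice-class template; suppose "tax" was lost to an OCR error:
template = ["total", "tax", "subtotal", "invoice", "date"]
ocr_words = ["invoice", "date", "total", "subtotal", "amount", "due"]
score, is_classified = classify_against_template(ocr_words, template)
# 4 of 5 keywords found -> score 0.8 > 0.75, so the document is classified
```

Sampson's actual system also uses keyword location information relative to other keywords; this sketch shows only the score-versus-threshold step the rejection quotes.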

Prosecution Timeline

Apr 03, 2024
Application Filed
Jan 16, 2026
Non-Final Rejection — §101, §102, §103
Apr 07, 2026
Interview Requested
Apr 16, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12574602
CONTROL DISPLAY METHOD, ELECTRONIC DEVICE, AND NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Patent 12568166
TIME-AVERAGED PROXIMITY SENSOR
2y 5m to grant Granted Mar 03, 2026
Patent 12554991
Device and Method for Performing Self-Learning Operations of an Artificial Neural Network
2y 5m to grant Granted Feb 17, 2026
Patent 12511029
USER INTERFACE FOR AN AUTOMATED MASSAGE SYSTEM WITH BODY MODEL AND CONTROL OBJECT
2y 5m to grant Granted Dec 30, 2025
Patent 12483602
COMPUTER IMPLEMENTED METHOD AND APPARATUS FOR MANAGEMENT OF NON-BINARY PRIVILEGES IN A STRUCTURED USER ENVIRONMENT
2y 5m to grant Granted Nov 25, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
58%
Grant Probability
99%
With Interview (+48.0%)
3y 5m
Median Time to Grant
Low
PTA Risk
Based on 347 resolved cases by this examiner. Grant probability derived from career allow rate.
