Prosecution Insights
Last updated: April 19, 2026
Application No. 18/533,685

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Final Rejection: §101, §103, §112
Filed: Dec 08, 2023
Examiner: DIGUGLIELMO, DANIELLA MARIE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Canon Kabushiki Kaisha
OA Round: 2 (Final)
Grant Probability: 81% (Favorable)
OA Rounds: 3-4
To Grant: 2y 9m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (above average; 137 granted / 170 resolved; +18.6% vs TC avg)
Interview Lift: +26.4% (strong; allowance among resolved cases with vs. without interview)
Typical Timeline: 2y 9m avg prosecution; 25 currently pending
Career History: 195 total applications across all art units
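The headline allow rate follows directly from the resolved-case counts shown above. A quick sanity check, illustrative only and not the analytics provider's actual pipeline:

```python
# Recompute the examiner's career allow rate from the dashboard's own counts.
granted = 137   # applications granted
resolved = 170  # applications resolved (granted + abandoned)

allow_rate = granted / resolved  # fraction of resolved cases that granted
print(f"{allow_rate:.1%}")  # 80.6%, which the dashboard rounds to 81%
```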

Statute-Specific Performance

§101: 12.9% (-27.1% vs TC avg)
§103: 35.5% (-4.5% vs TC avg)
§102: 10.4% (-29.6% vs TC avg)
§112: 33.1% (-6.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 170 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-13 are pending.

Response to Arguments

Applicant’s arguments, see p. 7 of the remarks, filed 1/10/26, with respect to the abstract have been fully considered and are persuasive. The abstract objection of 10/10/25 has been withdrawn. Applicant's arguments filed 1/10/26 regarding the claim objections, the 35 U.S.C. 112(b) rejections, the 35 U.S.C. 103 rejections, and the 35 U.S.C. 101 rejections have been fully considered but are not persuasive.

First, Applicant argues on p. 7 of the remarks that the objections to claims 4, 5, 7, and 11 should be withdrawn. The Examiner respectfully disagrees. In each of these claims, Applicant has not addressed the objection regarding “a corresponding character string is not extracted”. These claim objections are therefore maintained.

Second, Applicant argues on p. 7 of the remarks that the 35 U.S.C. 112(b) rejections of claims 1, 8, 12, and 13 should be withdrawn. The Examiner respectfully disagrees. Applicant has not addressed the 112(b) rejections regarding the “input document image” and “character string…obtained” limitations. It remains unclear and indefinite whether the document image and the input document image are the same. There are also no previous obtaining steps, so “character string…obtained” is indefinite. These 112(b) rejections are therefore maintained.

Third, with respect to the 103 rejections, Applicant argues on p. 7-10 of the remarks that the prior art of record does not teach the following limitations in claim 1: “first extracting…using a single label classification” and “second extracting…in a case where a part of an extracted character string corresponding to a certain item among the plurality of items in the input document image includes a character string corresponding to another item."
The Examiner respectfully disagrees. With respect to the “first extracting” limitation, the prior art of record Matiukhov teaches that OCR logic of the DDES extracts metadata from the image data of a document, in which the metadata specifies sequences of text content items and text content item features associated with each text content item of the sequences of text content items (see Para. 0004). Matiukhov also teaches a trained machine learning model determining, based on the sequences of text content items and the text content item features, one or more text content items associated with a key/label (see Paras. 0004 and 0054). Additionally, Matiukhov teaches outputting a vector of probabilities in which each element of the vector is associated with a key/label, and the probability associated with a given element of the vector represents the probability that a particular text content item is associated with the key (see Paras. 0037 and 0058). Matiukhov further teaches that each word/character sequence is associated with a key/label (see Para. 0039). The Examiner interprets the probability that a word/character sequence is associated with a particular key/label as a single label classification.

With respect to the “second extracting” limitation, Matiukhov teaches that a word sequence corresponds to “Page 1 of 2 Account Number 925685-125 421 8 Billing Date Mar. 22, 2017” (see Para. 0043); the sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL; and the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first context item of the sequence (i.e., “1”) are input into the second logic (see Para. 0057). The Examiner interprets, for example, “Page 1 of 2” and “1 of 2 Account” as including a character string corresponding to another item. In this case, “Page” and “Account” are items not present in both strings. Additionally, the Examiner interprets, for example, “Page” in “Page 1 of 2” that is output from the MLL as the output of the first extracting, and “1” in “1 of 2 Account” that is output from the MLL as the output of the second extracting. Since “1” is not the same as “Page”, the character output from the second extracting is not the same as the character output from the first extracting.

Fourth, in response to Applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which Applicant relies (i.e., character string ranges of a plurality of extraction-target items overlap one another in the task of named entity recognition) are not recited in the rejected claim(s). Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Fifth, Applicant argues on p. 10-11 of the remarks that the 35 U.S.C. 101 rejections should be withdrawn. The Examiner respectfully disagrees. The claims still recite an abstract idea, such as a process that, under its broadest reasonable interpretation, covers performance of the limitations manually or in the mind by a human. Specifically, a human can mentally and manually identify and extract character strings (i.e., words) from a document associated with a specific classification/label (i.e., select the name of a person/company in a document). A person can also mentally and manually identify and extract characters that were not previously identified and extracted. Contrary to Applicant’s remarks, there is no recitation of the technical improvement(s) in the claims (i.e., there is no recitation of the improvements of Paras. 0003 and 0056 of Applicant’s specification). Therefore, the 101 rejections are maintained.
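The “single label classification” the Examiner reads onto Matiukhov's probability vector can be pictured as picking, for each text content item, the one key with the highest probability. A minimal illustrative sketch; the key names and probabilities below are hypothetical examples chosen for the illustration, not values taken from the reference:

```python
# Illustrative single-label classification: each text content item gets one
# probability per candidate key, and the highest-probability key wins.
# Key names and probabilities are hypothetical, not taken from Matiukhov.
def classify(token_probs: dict[str, float]) -> str:
    """Return the single key with the highest probability for one token."""
    return max(token_probs, key=token_probs.get)

# Hypothetical probability vector for the token "Page" over candidate keys.
probs_for_page = {
    "ACCOUNT NUMBER": 0.05,
    "TOTAL AMOUNT": 0.03,
    "DATE": 0.02,
    "PAGE": 0.90,
}
print(classify(probs_for_page))  # PAGE
```

The dispute, in effect, is whether assigning one key per token in this way meets the claimed “single label classification” of the first extracting step.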
Please note, Applicant has not addressed the objection to the title of the invention. Therefore, the specification objection in the Non-final Office Action mailed on 10/10/25 has been maintained.

Specification

The “Summary of the Invention” should be separate and distinct from the abstract. See section (h) below.

Content of Specification

(a) TITLE OF THE INVENTION: See 37 CFR 1.72(a) and MPEP § 606. The title of the invention should be placed at the top of the first page of the specification unless the title is provided in an application data sheet. The title of the invention should be brief but technically accurate and descriptive, preferably from two to seven words. It may not contain more than 500 characters.

(b) CROSS-REFERENCES TO RELATED APPLICATIONS: See 37 CFR 1.78 and MPEP § 211 et seq.

(c) STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT: See MPEP § 310.

(d) THE NAMES OF THE PARTIES TO A JOINT RESEARCH AGREEMENT: See 37 CFR 1.71(g).

(e) INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A READ-ONLY OPTICAL DISC, AS A TEXT FILE OR AN XML FILE VIA THE PATENT ELECTRONIC SYSTEM: The specification is required to include an incorporation-by-reference of electronic documents that are to become part of the permanent United States Patent and Trademark Office records in the file of a patent application. See 37 CFR 1.77(b)(5) and MPEP § 608.05. See also the Legal Framework for Patent Electronic System posted on the USPTO website (https://www.uspto.gov/sites/default/files/documents/2019LegalFrameworkPES.pdf) and MPEP § 502.05.

(f) STATEMENT REGARDING PRIOR DISCLOSURES BY THE INVENTOR OR A JOINT INVENTOR: See 35 U.S.C. 102(b) and 37 CFR 1.77.

(g) BACKGROUND OF THE INVENTION: See MPEP § 608.01(c). The specification should set forth the Background of the Invention in two parts: (1) Field of the Invention: A statement of the field of art to which the invention pertains. This statement may include a paraphrasing of the applicable U.S. patent classification definitions of the subject matter of the claimed invention. This item may also be titled “Technical Field.” (2) Description of the Related Art including information disclosed under 37 CFR 1.97 and 37 CFR 1.98: A description of the related art known to the applicant and including, if applicable, references to specific related art and problems involved in the prior art which are solved by the applicant’s invention. This item may also be titled “Background Art.”

(h) BRIEF SUMMARY OF THE INVENTION: See MPEP § 608.01(d). A brief summary or general statement of the invention as set forth in 37 CFR 1.73. The summary is separate and distinct from the abstract and is directed toward the invention rather than the disclosure as a whole. The summary may point out the advantages of the invention or how it solves problems previously existent in the prior art (and preferably indicated in the Background of the Invention). In chemical cases it should point out in general terms the utility of the invention. If possible, the nature and gist of the invention or the inventive concept should be set forth. Objects of the invention should be treated briefly and only to the extent that they contribute to an understanding of the invention.

(i) BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S): See MPEP § 608.01(f). A reference to and brief description of the drawing(s) as set forth in 37 CFR 1.74.

(j) DETAILED DESCRIPTION OF THE INVENTION: See MPEP § 608.01(g). A description of the preferred embodiment(s) of the invention as required in 37 CFR 1.71. The description should be as short and specific as is necessary to describe the invention adequately and accurately. Where elements or groups of elements, compounds, and processes are conventional and generally widely known in the field of the invention described, and their exact nature or type is not necessary for an understanding and use of the invention by a person skilled in the art, they should not be described in detail. However, where particularly complicated subject matter is involved, or where the elements, compounds, or processes may not be commonly or widely known in the field, the specification should refer to another patent or readily available publication which adequately describes the subject matter.

(k) CLAIM OR CLAIMS: See 37 CFR 1.75 and MPEP § 608.01(m). The claim or claims must commence on a separate sheet or electronic page (37 CFR 1.52(b)(3)). Where a claim sets forth a plurality of elements or steps, each element or step of the claim should be separated by a line indentation. There may be plural indentations to further segregate subcombinations or related steps. See 37 CFR 1.75 and MPEP § 608.01(i) - (p).

(l) ABSTRACT OF THE DISCLOSURE: See 37 CFR 1.72(b) and MPEP § 608.01(b). The abstract is a brief narrative of the disclosure as a whole, as concise as the disclosure permits, in a single paragraph preferably not exceeding 150 words, commencing on a separate sheet following the claims. In an international application which has entered the national stage (37 CFR 1.491(b)), the applicant need not submit an abstract commencing on a separate sheet if an abstract was published with the international application under PCT Article 21. The abstract that appears on the cover page of the pamphlet published by the International Bureau (IB) of the World Intellectual Property Organization (WIPO) is the abstract that will be used by the USPTO. See MPEP § 1893.03(e).

(m) SEQUENCE LISTING: See 37 CFR 1.821 - 1.825 and MPEP §§ 2421 - 2431.
The requirement for a sequence listing applies to all sequences disclosed in a given application, whether the sequences are claimed or not. See MPEP § 2422.01.

The abstract of the disclosure is objected to because: In line 2, “an document image” should read –the document image–. In line 3, “within a document” should read –within the document image–. In lines 3-4, “A character string” should read –The character string–. In line 4, “an input document image” should read –the document image–. In line 5, “the other item” should read –another item–. In line 6, “the first extracting” should read –the extracting–. In line 7, “the certain item” should read –a certain item–. In line 7, “the first extracting” should read –the extracting–. A corrected abstract of the disclosure is required and must be presented on a separate sheet, apart from any other text. See MPEP § 608.01(b).

The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.

Claim Objections

Claim 1 is objected to because of the following informalities: In lines 5-6, “an document image” should read –a document image–. Appropriate correction is required.

Claim 4 is objected to because of the following informalities: In line 3, “the other item among the plurality of items” should read –the other item–; “among the plurality of items” should be removed since it was previously removed in claim 1. In lines 3-4, “a corresponding character string” should read –the corresponding character string–. Appropriate correction is required.

Claim 5 is objected to because of the following informalities: In lines 4-5, “the other item among the plurality of items” should read –the other item–; “among the plurality of items” should be removed since it was previously removed in claim 1. In line 5, “a corresponding character string” should read –the corresponding character string–. Appropriate correction is required.

Claim 7 is objected to because of the following informalities: In lines 2-3, “the other item among the plurality of items” should read –the other item–; “among the plurality of items” should be removed since it was previously removed in claim 1. In line 3, “a corresponding character string” should read –the corresponding character string–. Appropriate correction is required.

Claim 8 is objected to because of the following informalities: In line 6, “performing;” should read –performing:–; the semicolon should be changed to a colon. Appropriate correction is required.

Claim 10 is objected to because of the following informalities: In line 4, “the other different training model” should read –the different training model–. Appropriate correction is required.

Claim 11 is objected to because of the following informalities: In line 3, “the other item among the plurality of items” should read –the other item–; “among the plurality of items” should be removed since it was previously removed in claim 8. In lines 3-4, “a corresponding character string” should read –the corresponding character string–. Appropriate correction is required.

Claim 12 is objected to because of the following informalities: In lines 3-4, “an document image” should read –a document image–. Appropriate correction is required.

Claim 13 is objected to because of the following informalities: In lines 4-5, “an document image” should read –a document image–. Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “…a character string extractor configured to estimate” in claims 8 and 12.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-13 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 1 recites the limitation "an input document image" in line 8. It is unclear and indefinite if this is the same as the “document image” previously recited in the claim. Claim 1 recites the limitation “second extracting to extract…obtained by the first extracting” in lines 9-14. It is unclear what this limitation is saying and thus what the scope of the claim is. Claim 1 recites “the other item” in line 12. There is insufficient antecedent basis for this limitation in the claim. Only a “certain item” and an “another item” are previously recited in the claim. Claim 1 recites the limitation “the corresponding character string” in line 12. It is unclear and indefinite which corresponding character string is being referred to (i.e., that of the plurality of items or that of the another item). Claim 1 recites the limitation “the certain item obtained” in lines 13-14. There is insufficient antecedent basis for this limitation in the claim, as there is no previous obtaining step, only extracting steps.

Claims 2-7 depend on claim 1 and are therefore also rejected under 112(b).

Claim 8 recites the limitation “the plurality of items” in line 8.
There is insufficient antecedent basis for this limitation in the claim. Claim 8 recites the limitation "an input document image" in line 8. It is unclear and indefinite if this is the same as the “document image” previously recited in the claim. Claim 8 recites the limitation “second extracting to extract…obtained by the first extracting” in lines 10-15. It is unclear what this limitation is saying and thus what the scope of the claim is. Claim 8 recites the limitation “the other item” in line 13. There is insufficient antecedent basis for this limitation in the claim. Only a “certain item” and an “another item” are previously recited in the claim. Claim 8 recites the limitation “the corresponding character string” in lines 13-14. It is unclear and indefinite which corresponding character string is being referred to (i.e., that of the plurality of items or that of the another item). Claim 8 recites the limitation “the certain item obtained” in line 15. There is insufficient antecedent basis for this limitation in the claim, as there is no previous obtaining step, only extracting steps.

Claims 9-11 depend on claim 8 and are therefore also rejected under 112(b).

Claim 12 recites the limitation "an input document image" in line 6. It is unclear and indefinite if this is the same as the “document image” previously recited in the claim. Claim 12 recites the limitation “performing second extracting to extract…obtained by the first extracting” in lines 7-12. It is unclear what this limitation is saying and thus what the scope of the claim is. Claim 12 recites the limitation “the other item” in line 10. There is insufficient antecedent basis for this limitation in the claim. Only a “certain item” and an “another item” are previously recited in the claim. Claim 12 recites the limitation “the corresponding character string” in line 10. It is unclear and indefinite which corresponding character string is being referred to (i.e., that of the plurality of items or that of the another item). Claim 12 recites the limitation “the certain item obtained” in line 12. There is insufficient antecedent basis for this limitation in the claim, as there is no previous obtaining step, only extracting steps.

Claim 13 recites the limitation "an input document image" in line 7. It is unclear and indefinite if this is the same as the “document image” previously recited in the claim. Claim 13 recites the limitation “performing second extracting to extract…obtained by the first extracting” in lines 8-13. It is unclear what this limitation is saying and thus what the scope of the claim is. Claim 13 recites the limitation “the other item” in line 11. There is insufficient antecedent basis for this limitation in the claim. Only a “certain item” and an “another item” are previously recited in the claim. Claim 13 recites the limitation “the corresponding character string” in line 11. It is unclear and indefinite which corresponding character string is being referred to (i.e., that of the plurality of items or that of the another item). Claim 13 recites the limitation “the certain item obtained” in line 13. There is insufficient antecedent basis for this limitation in the claim, as there is no previous obtaining step, only extracting steps.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea (mental processes) without significantly more. The claims recite an apparatus, system, method, and non-transitory computer readable storage medium for extracting character strings.
With respect to the analysis of claim 12 (claims 1, 8, and 13 recite similar limitations):

Step 1: With regard to Step 1, claim 12 is directed to a method, and therefore the claim is directed to one of the statutory categories of inventions.

Step 2A, Prong One: With regard to Step 2A, Prong One, the following limitations in claim 12 (and similarly claims 1, 8, and 13) as drafted recite an abstract idea: “performing first extracting to extract a character string corresponding to each of a plurality of items for an input document image using a single label classification; and performing second extracting to extract, in a case where a part of an extracted character string corresponding to a certain item among the plurality of items in the input document image includes a character string corresponding to another item, a character string corresponding to the other item, for which the corresponding character string is not extracted by the first extracting, from the extracted character string corresponding to the certain item obtained by the first extracting.” The limitations recite abstract ideas, such as a process that, under its broadest reasonable interpretation, covers performance of the limitations manually or in the mind by a human. That is, a human can mentally and manually identify and extract character strings (i.e., words) from a document associated with a specific classification/label (i.e., select the name of a person/company in a document). A person can also mentally and manually identify and extract characters that were not previously identified and extracted. These are concepts that fall under the abstract-idea grouping of mental processes, i.e., concepts performed in the human mind (evaluation, judgment, and/or opinion of a human).

Step 2A, Prong Two: The 2019 PEG defines the phrase “integration into a practical application” to require an additional element, or a combination of additional elements, in the claim to apply, rely on, or use the judicial exception. In the instant case, there are no additional steps/elements/limitations in the claims, with the exception of the following in the method claim (claim 12), the apparatus claim (claim 1), the system claim (claim 8), and the non-transitory computer readable storage medium claim (claim 13): “by using a training model functioning as a character string extractor configured to estimate an extraction-target character string included in an document image” in claim 12; “one or more memories storing instructions; and one or more processors executing the instructions to perform: by using a training model functioning as a character string extractor configured to estimate an extraction-target character string included in an document image” in claim 1; “a training device generating a training model functioning as a character string extractor configured to estimate an extraction-target character string included in an document image; an information processing apparatus performing; by using the training model” in claim 8; and a “computer” and “by using a training model functioning as a character string extractor configured to estimate an extraction-target character string included in an document image” in claim 13. The processor, memory, training model, training device, and computer are mere generic computer(s) and/or computer components. The training model is just a generic computer/computer model, as specifics of the training are not recited. These are regarded as adding routine and conventional elements to perform the judicial exception, and they do not apply it in a practical application. Accordingly, the above-mentioned additional elements/limitations do not integrate the abstract idea into a practical application, and therefore the claims recite an abstract idea.

Step 2B: Because the claims fail under Step 2A, the claims are further evaluated under Step 2B. The claims herein do not include additional elements that are sufficient to amount to significantly more than the judicial exception because, as discussed above with respect to integration of the abstract idea into a practical application, the additional elements/limitations that perform the steps amount to no more than insignificant routine and conventional elements. Mere instructions to apply an exception using generic components cannot provide an inventive concept. Therefore, claims 1, 8, 12, and 13 are not patent eligible.

Furthermore, with regard to claims 2-7 and 9-11 when viewed individually, these additional steps, under their broadest reasonable interpretation, provide extra-solution activities to cover performance of the limitations as an abstract idea, and do not provide meaningful limitations to transform the abstract idea into a patent eligible application of the abstract idea such that the claims amount to significantly more than the abstract idea itself. Accordingly, they are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-13 are rejected under 35 U.S.C. 103 as being unpatentable over Matiukhov (US 2022/0156490 A1). Regarding claim 1, Matiukhov teaches, an information processing apparatus comprising (Para. 0004: document data extraction system (DDES); Fig. 1: Document Data Extraction System 102): one or more memories storing instructions (Para. 0004: DDES includes a memory that stores instruction code); and one or more processors executing the instructions to perform (Para. 0004: instruction code is executable by the processor to perform operations): first extracting to extract, by using a training model functioning as a character string extractor configured to estimate an extraction-target character string included in a document image, a character string corresponding to each of a plurality of items for an input document image using a single label classification (Para. 0004: Image data is associated with a document. OCR logic of the DDES extracts metadata from the image data. The metadata specifies sequences of text content items and text content item features associated with each text content item of the sequences of text content items. A machine learning logic module of the DDES determines, based on the sequences of text content items and the text content item features, one or more text content items associated with a key; Para. 0015: the machine learning logic is trained; Para. 0037: “The output layer 250 is configured to output a vector of probabilities, where each element of the vector is associated with one of a plurality of keys or labels. The probability associated with a given element of the vector represents the probability that a particular text content item is associated with the key that is associated with the element”; Para. 0039: each word/character sequence is associated with a key/label (e.g., name, date, etc.); Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0053; Para. 0054: the model is trained to associate text content items with the keys/labels specified by the user and the sequence “Page 1 of 2” is input into the first logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE."”; Para. 0059; Table 3; Note: the Examiner interprets the probability that a word/character sequence is associated with a particular key/label as a single label classification); and second extracting to extract, in a case where a part of an extracted character string corresponding to a certain item among the plurality of items in the input document image includes a character string corresponding to another item, a character string corresponding to the other item, for which the corresponding character string is not extracted by the first extracting, from the extracted character string corresponding to the certain item obtained by the first extracting (As shown in Para. 0043, the word sequence corresponds to “Page 1 of 2 Account Number 925685-125 421 8 Billing Date Mar. 22, 2017”; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE”; Note: As shown in the paragraphs above, the first and second logic of the MLL extract different content information. The Examiner interprets, for example, “Page 1 of 2” and “1 of 2 Account” as including a character string corresponding to another item. In this case, “Page” and “Account” are items not present in both strings. Additionally, the Examiner interprets, for example, “Page” in “Page 1 of 2” that is output from the MLL as the output of the first extracting, and “1” in “1 of 2 Account” that is output from the MLL as the output of the second extracting. Since “1” is not the same as “Page”, the character output from the second extraction is not the same as the character output from the first extraction). Matiukhov discloses and teaches the above limitations in different embodiments.
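As an editorial illustration only (not part of the record), the "single label classification" interpretation above — a probability vector over a fixed set of keys/labels with exactly one key assigned per text content item — can be sketched as follows. The key names and probability values below are hypothetical examples modeled loosely on the quoted passages, not data from Matiukhov.

```python
# Illustrative sketch of single-label classification over keys/labels:
# each text content item receives a probability vector over the keys, and
# the single highest-probability key is assigned to that item.
# The keys and scores are hypothetical examples.

def classify_single_label(scores):
    """Return the one key with the highest probability for a text content item."""
    return max(scores, key=scores.get)

# Hypothetical probability vector for the token "Page" over three keys:
page_scores = {"ACCOUNT NUMBER": 0.05, "TOTAL AMOUNT": 0.10, "DATE": 0.08}

# Under single-label classification, exactly one label is chosen per item.
label = classify_single_label(page_scores)  # "TOTAL AMOUNT" in this toy example
```

The point of the sketch is only that a per-item probability vector with a single winning key is naturally read as single-label (rather than multi-label) classification, which is the interpretation the Examiner applies.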
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the embodiments of a first and second extracting of a trained model, as claimed, by known methods, since in combination each element merely performs the same function as it does separately, and the results of the combination were predictable. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 1. Regarding claim 2, Matiukhov teaches the limitations as explained above in claim 1. Matiukhov further teaches, the information processing apparatus according to claim 1 (see claim 1 above), wherein the second extracting is performed by using the training model used for the first extracting, whose input and output are limited (Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: The model is trained and sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0059; Note: As shown in these paragraphs, different content information is input into the first and second logic of the MLL and each logic has an output (i.e., the inputs and outputs are specific/limited to each MLL logic)). Regarding claim 3, Matiukhov teaches the limitations as explained above in claim 1. Matiukhov further teaches, the information processing apparatus according to claim 1 (see claim 1 above), wherein the second extracting is performed by using a training model different from the training model used for the first extracting, which is trained to extract a character string corresponding to a second item different from a first item from a character string corresponding to the first item of the plurality of items (Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: The model is trained and sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065; Note: As shown in these paragraphs, different models can be used and different content information is input into the first and second logic of the MLL). Regarding claim 4, Matiukhov teaches the limitations as explained above in claim 1. Matiukhov further teaches, the information processing apparatus according to claim 1 (see claim 1 above), wherein in the second extracting, key-value extracting is performed, to which a keyword and a data type corresponding to the other item among the plurality of items, for which a corresponding character string is not extracted by the first extracting, are set (Para. 0027: During training, text content of one or more areas of the document image are associated with one or more keys and the user of the terminal can select words or combination of words and associate the words or combinations of words with different keys or labels; Para. 0030: information specifies the key and a corresponding value that is associated with the one or more text content items associated with the key; Fig. 4A and Para. 0040; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0052; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065). Regarding claim 5, Matiukhov teaches the limitations as explained above in claim 1. Matiukhov further teaches, the information processing apparatus according to claim 1 (see claim 1 above), wherein the one or more processors further execute the instructions to perform setting an extraction-target item in the second extracting in advance (Para. 0004: instruction code is executable by the processor to perform operations; Para. 0027: During training, text content of one or more areas of the document image are associated with one or more keys and the user of the terminal can select words or combination of words and associate the words or combinations of words with different keys or labels; As shown in Para. 0039, a rectangular box is dragged around the text content item and a key or label associated with the selected text content item is specified; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0052; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL; Para. 0058-0065) and the second extracting is performed in a case where the other item among the plurality of items, for which a corresponding character string is not extracted by the first extracting, is the extraction-target item set in advance (Para. 0004: instruction code is executable by the processor to perform operations; Para. 0027: During training, text content of one or more areas of the document image are associated with one or more keys and the user of the terminal can select words or combination of words and associate the words or combinations of words with different keys or labels; As shown in Para. 0039, a rectangular box is dragged around the text content item and a key or label associated with the selected text content item is specified; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0052; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065; Note: As shown in these paragraphs, the first and second logic of the MLL extract different content information). Regarding claim 6, Matiukhov teaches the limitations as explained above in claim 1.
Matiukhov further teaches, the information processing apparatus according to claim 1 (see claim 1 above), wherein the one or more processors further execute the instructions to perform causing a display unit to display a UI screen on which results of the first extracting are shown, on the UI screen, a UI element for a user to give instructions to perform the second extracting exists (Para. 0004: instruction code is executable by the processor to perform operations; Para. 0039: rectangular selection box is dragged around the text content item and a key or label is specified for the selected text content item; As shown in Para. 0051-0054, there are multiple models, and it is determined whether a model exists and if not, then keys are specified and associated with values. The user, via an interface of the terminal, can select text content items and specify keys/labels to associate with the text content items; Paras. 0061-0065; Para. 0081: the computer system includes a display which acts as an interface for the user to see processing results produced by the processor; Para. 0082: input device allows for the user to interact with the computer system), and based on user instructions via the UI screen, the second extracting is performed (As shown in Para. 0051-0054, there are multiple models, and it is determined whether a model exists and if not, then keys are specified and associated with values. The user, via an interface of the terminal, can select text content items and specify keys/labels to associate with the text content items; Paras. 0061-0065; Para. 0081: the computer system includes a display which acts as an interface for the user to see processing results produced by the processor; Para. 0082: input device allows for the user to interact with the computer system). Regarding claim 7, Matiukhov teaches the limitations as explained above in claim 6. 
Matiukhov further teaches, the information processing apparatus according to claim 6 (see claim 6 above), wherein the UI element is displayed on the UI screen in association with the other item among the plurality of items, for which a corresponding character string is not extracted by the first extracting (Para. 0039: rectangular selection box is dragged around the text content item and a key or label is specified for the selected text content item; As shown in Para. 0051-0054, there are multiple models, and it is determined whether a model exists and if not, then keys are specified and associated with values. The user, via an interface of the terminal, can select text content items and specify keys/labels to associate with the text content items; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065; Para. 0081: the computer system includes a display which acts as an interface for the user to see processing results produced by the processor; Para. 0082: input device allows for the user to interact with the computer system) and in the second extracting, a character string corresponding to the other item with which the UI element is associated is extracted (Para. 0039: rectangular selection box is dragged around the text content item and a key or label is specified for the selected text content item; As shown in Para. 0051-0054, there are multiple models, and it is determined whether a model exists and if not, then keys are specified and associated with values. The user, via an interface of the terminal, can select text content items and specify keys/labels to associate with the text content items; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065; Para. 0081: the computer system includes a display which acts as an interface for the user to see processing results produced by the processor; Para. 0082: input device allows for the user to interact with the computer system). Regarding claim 8, Matiukhov teaches, an information processing system comprising (Para. 0004: document data extraction system (DDES)): a training device generating a training model functioning as a character string extractor configured to estimate an extraction-target character string included in a document image (Fig. 1: Document Data Extraction System 102; Para. 0004: Image data is associated with a document. OCR logic of the DDES extracts metadata from the image data. The metadata specifies sequences of text content items and text content item features associated with each text content item of the sequences of text content items. A machine learning logic module of the DDES determines, based on the sequences of text content items and the text content item features, one or more text content items associated with a key; Para. 0015: the machine learning logic is trained using training documents; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0053; Paras.
0054-0065); and an information processing apparatus performing: first extracting to extract, by using the training model, a character string corresponding to each of the plurality of items for an input document image using a single label classification (Fig. 1: Document Data Extraction System 102; Para. 0004: Image data is associated with a document. OCR logic of the DDES extracts metadata from the image data. The metadata specifies sequences of text content items and text content item features associated with each text content item of the sequences of text content items. A machine learning logic module of the DDES determines, based on the sequences of text content items and the text content item features, one or more text content items associated with a key; Para. 0015: the machine learning logic is trained; Para. 0037: “The output layer 250 is configured to output a vector of probabilities, where each element of the vector is associated with one of a plurality of keys or labels. The probability associated with a given element of the vector represents the probability that a particular text content item is associated with the key that is associated with the element”; Para. 0039: each word/character sequence is associated with a key/label (e.g., name, date, etc.); Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0053; Para. 0054: sequence “Page 1 of 2” is input into the first logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE."”; Para. 0059; Table 3; Note: the Examiner interprets the probability that a word/character sequence is associated with a particular key/label as a single label classification) and second extracting to extract, in a case where a part of an extracted character string corresponding to a certain item among the plurality of items in the input document image includes a character string corresponding to another item, a character string corresponding to the other item, for which the corresponding character string is not extracted by the first extracting, from the extracted character string corresponding to the certain item obtained by the first extracting (As shown in Para. 0043, the word sequence corresponds to “Page 1 of 2 Account Number 925685-125 421 8 Billing Date Mar. 22, 2017”; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE”; Note: As shown in the paragraphs above, the first and second logic of the MLL extract different content information. The Examiner interprets, for example, “Page 1 of 2” and “1 of 2 Account” as including a character string corresponding to another item. In this case, “Page” and “Account” are items not present in both strings. Additionally, the Examiner interprets, for example, “Page” in “Page 1 of 2” that is output from the MLL as the output of the first extracting, and “1” in “1 of 2 Account” that is output from the MLL as the output of the second extracting. Since “1” is not the same as “Page”, the character output from the second extraction is not the same as the character output from the first extraction). Matiukhov discloses and teaches the above limitations in different embodiments. It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the embodiments of a first and second extracting of a trained model, as claimed, by known methods, since in combination each element merely performs the same function as it does separately, and the results of the combination were predictable. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 8. Regarding claim 9, Matiukhov teaches the limitations as explained above in claim 8. Matiukhov further teaches, the information processing system according to claim 8 (see claim 8 above), wherein the second extracting is performed by using the training model used for the first extracting, whose input and output are limited (Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: The model is trained and sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0059; Note: As shown in these paragraphs, different content information is input into the first and second logic of the MLL and each logic has an output (i.e., the inputs and outputs are specific/limited to each MLL logic)). Regarding claim 10, Matiukhov teaches the limitations as explained above in claim 8. Matiukhov further teaches, the information processing system according to claim 8 (see claim 8 above), wherein the second extracting is performed by using a training model different from the training model used for the first extracting and the training device further generates the other different training model by performing training for extracting a character string corresponding to a second item different from a first item from a character string corresponding to the first item of the plurality of items (Fig. 1: Document Data Extraction System 102; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: The model is trained and sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065; Note: As shown in these paragraphs, different models can be used and different content information is input into the first and second logic of the MLL). Regarding claim 11, Matiukhov teaches the limitations as explained above in claim 8. Matiukhov further teaches, the information processing system according to claim 8 (see claim 8 above), wherein in the second extracting, key-value extracting is performed, to which a keyword and a data type corresponding to the other item among the plurality of items, for which a corresponding character string is not extracted by the first extracting, are set (Para. 0027: During training, text content of one or more areas of the document image are associated with one or more keys and the user of the terminal can select words or combination of words and associate the words or combinations of words with different keys or labels; Para. 0030: information specifies the key and a corresponding value that is associated with the one or more text content items associated with the key; Fig. 4A and Para. 0040; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0052; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL.
Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058-0065). Regarding claim 12, Matiukhov teaches, an information processing method comprising the steps of (Para. 0003: method performed by a computing system comprising a document data extraction system (DDES)): performing first extracting to extract, by using a training model functioning as a character string extractor configured to estimate an extraction-target character string included in a document image, a character string corresponding to each of a plurality of items for an input document image using a single label classification (Para. 0004: Image data is associated with a document. OCR logic of the DDES extracts metadata from the image data. The metadata specifies sequences of text content items and text content item features associated with each text content item of the sequences of text content items. A machine learning logic module of the DDES determines, based on the sequences of text content items and the text content item features, one or more text content items associated with a key; Para. 0015: the machine learning logic is trained; Para. 0037: “The output layer 250 is configured to output a vector of probabilities, where each element of the vector is associated with one of a plurality of keys or labels. The probability associated with a given element of the vector represents the probability that a particular text content item is associated with the key that is associated with the element”; Para. 0039: each word/character sequence is associated with a key/label (e.g., name, date, etc.); Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0053; Para. 0054: the model is trained to associate text content items with the keys/labels specified by the user and the sequence “Page 1 of 2” is input into the first logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE."”; Para. 0059; Table 3; Note: the Examiner interprets the probability that a word/character sequence is associated with a particular key/label as a single label classification); and performing second extracting to extract, in a case where a part of an extracted character string corresponding to a certain item among the plurality of items in the input document image includes a character string corresponding to another item, a character string corresponding to the other item, for which the corresponding character string is not extracted by the first extracting, from the extracted character string corresponding to the certain item obtained by the first extracting (As shown in Para. 0043, the word sequence corresponds to “Page 1 of 2 Account Number 925685-125 421 8 Billing Date Mar. 22, 2017”; Para. 0051: each MLL model is associated with a different type of document and each MLL model determines text content items associated with a key; Para. 0054: sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE”; Note: As shown in the paragraphs above, the first and second logic of the MLL extract different content information. The Examiner interprets, for example, “Page 1 of 2” and “1 of 2 Account” as including a character string corresponding to another item. In this case, “Page” and “Account” are items not present in both strings. Additionally, the Examiner interprets, for example, “Page” in “Page 1 of 2” that is output from the MLL as the output of the first extracting, and “1” in “1 of 2 Account” that is output from the MLL as the output of the second extracting. Since “1” is not the same as “Page”, the character output from the second extraction is not the same as the character output from the first extraction). Matiukhov discloses and teaches the above limitations in different embodiments.
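As a second editorial illustration (again, not part of the record), the claimed "second extracting" — recovering a character string for another item from within a string already extracted for a certain item, using a set keyword and data type as in the claim 4/claim 11 key-value limitation — could be sketched as a keyword-plus-pattern search. The field name, date pattern, and example string below are hypothetical, loosely based on the word sequence quoted from Para. 0043.

```python
# Illustrative sketch: a second extraction pass that searches the character
# string produced by the first pass for a set keyword followed by a value of
# an expected data type (here, a date-like pattern). The keyword, regex, and
# sample text are hypothetical illustrations, not the applicant's method.
import re

def second_extract(extracted, keyword, value_pattern):
    """Find `keyword` in the first-pass string and return the typed value after it."""
    m = re.search(rf"{re.escape(keyword)}\s*({value_pattern})", extracted)
    return m.group(1) if m else None

# Suppose the first pass yielded one long string for a single item; the
# second pass then pulls out the date string for a different item.
first_pass = "Page 1 of 2 Account Number 925685-125 Billing Date Mar. 22, 2017"
date = second_extract(first_pass, "Billing Date", r"[A-Z][a-z]{2}\. \d{1,2}, \d{4}")
```

The sketch is meant only to make the claimed two-stage structure concrete: the second stage operates on the first stage's output string, not on the document image itself.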
It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the embodiments of a first and second extracting of a trained model, as claimed by known methods, since in combination each element merely performs the same function as it does separately, and the results of the combination were predictable. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 12. Regarding claim 13, Matiukhov teaches, a non-transitory computer readable storage medium storing a program for causing a computer to perform an information processing method comprising the steps of (Para. 0005: The non-transitory computer readable medium stores instruction code that is executable by a processor for causing the processor to perform operations that; Fig. 7: computer readable medium 740): performing first extracting to extract, by using a training model functioning as a character string extractor configured to estimate an extraction-target character string included in an document image, a character string corresponding to each of a plurality of items for an input document image using a single label classification (Para. 0004: Image data is associated with a document. OCR logic of the DDES extracts metadata from the image data. The metadata specifies sequences of text content items and text content item features associated with each text content item of the sequences of text content items. A machine learning logic module of the DDES determines, based on the sequences of text content items and the text content item features, one or more text content items associated with a key; Para. 0015: the machine learning logic is trained; Para. 0037: “The output layer 250 is configured to output a vector of probabilities, where each element of the vector is associated with one of a plurality of keys or labels. 
The probability associated with a given element of the vector represents the probability that a particular text content item is associated with the key that is associated with the element”; Para. 0039: each word/character sequence is associated with a key/label (i.e., name, date, etc.); Para. 0051: each MLL model is associated with a different type of document, and each MLL model determines text content items associated with a key; Para. 0053; Para. 0054: the model is trained to associate text content items with the keys/labels specified by the user, and the sequence “Page 1 of 2” is input into the first logic of the MLL; As shown in Para. 0057, the sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key. For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE."”; Para. 0059; Table 3; Note: the Examiner interprets the probability that a word/character sequence is associated with a particular key/label as a single label classification);

and performing second extracting to extract, in a case where a part of an extracted character string corresponding to a certain item among the plurality of items in the input document image includes a character string corresponding to another item, a character string corresponding to the other item, for which the corresponding character string is not extracted by the first extracting, from the extracted character string corresponding to the certain item obtained by the first extracting (As shown in Para. 0043, the word sequence corresponds to “Page 1 of 2 Account Number 925685-125 421 8 Billing Date Mar. 22, 2017”; Para. 0051: each MLL model is associated with a different type of document, and each MLL model determines text content items associated with a key; Para. 0054: the sequence “Page 1 of 2” is input into the first logic of the MLL; Para. 0056: text content item features corresponding to features in the columns of Table 1 are input into the second logic of the MLL; As shown in Para. 0057, the sequence “Page 1 of 2” is input into the first logic of the MLL and text item features associated with the first text content item (i.e., “Page”) are input into the second logic of the MLL. Additionally, the sequence “1 of 2 Account” is input into the first logic of the MLL and text features associated with the first content item of the sequence (i.e., “1”) are input into the second logic; Para. 0058: “the value of each vector corresponds to the probability that the text content item being processed (e.g., "Page" during the first interaction) is associated with the corresponding key.
For example, the output will indicate the probability that the term "Page" is associated with the keys "ACCOUNT NUMBER," "TOTAL AMOUNT," and "DATE."”; Note: As shown in the paragraphs above, the first and second logic of the MLL extract different content information. The Examiner interprets, for example, “Page 1 of 2” and “1 of 2 Account” as each including a character string corresponding to another item. In this case, “Page” and “Account” are items not present in both strings. Additionally, the Examiner interprets, for example, “Page” in “Page 1 of 2” that is output from the MLL as the output of the first extracting, and “1” in “1 of 2 Account” that is output from the MLL as the output of the second extracting. Since “1” is not the same as “Page”, the character string output from the second extracting is not the same as the character string output from the first extracting).

Matiukhov discloses and teaches the above limitations in different embodiments. It would have been obvious, before the effective filing date of the claimed invention, to one of ordinary skill in the art to combine the embodiments of a first and second extracting of a trained model, as claimed, by known methods, since in combination each element merely performs the same function as it does separately, and the results of the combination were predictable. It is for at least the aforementioned reasons that the Examiner has reached a conclusion of obviousness with respect to claim 13.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Yaramada et al. (US 2022/0121821 A1)
Miller et al. (US 2020/0005258 A1)
Muraoka et al. (US 2019/0095525 A1)

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Daniella M. DiGuglielmo, whose telephone number is (571) 272-0183. The examiner can normally be reached Monday through Friday, 8:00 AM to 4:00 PM.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Daniella M. DiGuglielmo/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Dec 08, 2023
Application Filed
Oct 07, 2025
Non-Final Rejection — §101, §103, §112
Jan 10, 2026
Response Filed
Mar 13, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586401
SYSTEMS AND METHODS FOR REPRESENTING AND SEARCHING CHARACTERS
2y 5m to grant Granted Mar 24, 2026
Patent 12567228
IMAGE DATA PROCESSING METHOD, IMAGE DATA PROCESSING APPARATUS, AND COMMERCIAL USE
2y 5m to grant Granted Mar 03, 2026
Patent 12567266
IMAGE RECOGNITION SYSTEM AND IMAGE RECOGNITION METHOD
2y 5m to grant Granted Mar 03, 2026
Patent 12555372
IMAGE SENSOR EVALUATION METHOD USING COMPUTING DEVICE INCLUDING PROCESSOR
2y 5m to grant Granted Feb 17, 2026
Patent 12548147
Systems and Methods Related to Age-Related Macular Degeneration
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
81%
Grant Probability
99%
With Interview (+26.4%)
2y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 170 resolved cases by this examiner. Grant probability derived from career allow rate.
