DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 17 is objected to because of the following informalities: in line one, the word “filed” is recited; however, this should instead recite “field.” Appropriate correction is required.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1–12 and 14–20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 6 and 13 of U.S. Patent No. 12,175,785 (herein “‘785 patent”) in view of Chen et al., Implementing Document Imaging and Capture Solutions with IBM Datacap, IBM Redbooks, October 2, 2015 (herein “Chen”). Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the ‘785 patent recite most of the limitations of the present application, with the correspondence between the claims set forth below.
Regarding claims 1 and 20 of the present application, claim 1 of the ‘785 patent corresponds as follows, with deficiencies of claim 1 of the ‘785 patent noted below in curly brackets {}:
| Claims 1 and 20 of the present application | Claim 1 of the ‘785 patent |
| --- | --- |
| A method comprising: - claim 1 {A system comprising: a processor; and a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising: - claim 20} | A method comprising: |
| converting, by a processor, a document into an image; | converting, by one or more processors, a document into an image document |
| detecting, by the processor using {an artificial intelligence engine}, words on the document; searching, by the processor, the words for keywords; | searching, by the one or more processors, words on the image document for keywords |
| searching, by the processor using {the artificial intelligence engine}, for an object on the document; | determining, by the one or more processors, a type of an object on the image document; |
| determining, by the processor, an object field based on the keywords and the object; | determining, by the one or more processors, an existence and location of an object field in the image document |
| creating, by the processor, a tag with metadata about a type of the tag and the object field; | creating, by the one or more processors, a tag with metadata about a type of the tag and the location of the object field |
| associating, by the processor using {the artificial intelligence engine}, the tag with the object field; | associating, by the one or more processors, the tag with the object field |
| and enabling, by the processor using the metadata, interaction with the object field. | enabling, by the one or more processors using the metadata, interaction with the object field |
Claim 1 of the ‘785 patent does not explicitly recite, but Chen teaches, an artificial intelligence engine (Chen page 119, third-from-last paragraph, teaching that the Datacap system “learns” unknown formatted document layouts for users by way of verification as user feedback and interaction, thus using an artificial intelligence).
Regarding claim 20 only, claim 1 of the ‘785 patent does not recite, but Chen teaches, “A system comprising: a processor; and a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising:” (Chen pages 69 and 73, Datacap processing performed on a Datacap Server (processor) including a database server connection (non-transitory memory)).
Therefore, taking claim 1 of the ‘785 patent and Chen together as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 1 to include the Datacap machine learning, realizing a type of artificial intelligence executed on a Datacap Server (for claim 20), as disclosed in Chen, at least because doing so would allow for processing documents that are unstructured and for which the variation of documents is not controllable. See Chen page 118.
Regarding claim 2, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising training the artificial intelligence engine using a plurality of documents to learn to identify object fields in the plurality of documents (Chen pages 45–46, Datacap system using IBM Content Classification that learns from the processing of a range of sample documents to perform full-text recognition by processing OCR documents without operator intervention, where recognizing includes bar code recognition to locate and recognize bar codes in an image (identify object fields)). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 3, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising training the artificial intelligence engine with at least one of participant feedback or participant interaction with the objects on the document (Chen pages 58–59, Datacap providing an interface for users to click on and manually correct low-confidence recognition results of certain fields in a document). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 4, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising training the artificial intelligence engine using at least one of similarities or differences of a plurality of documents (Chen page 90, learning template used for unstructured documents that are known to have some fields (similarities) but unknown where fields are located (differences), the Datacap learns new document formats when they are processed using the learning template). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 5, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising determining, by the processor using the artificial intelligence engine, an object type of the object (Chen pages 148–149, a learning template is trained over time to automatically find data through locate rules further taught on pages 159–160 as extracting data field zones of different types). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 6, claim 1 of the ‘785 patent reciting “wherein the keywords include at least one of a name of a participant that needs to sign the image document or notary language” corresponds to the claimed “wherein the keywords include at least one of names of participants that need to sign the document, the participant type, document types, instructional terms or notary language.”
Regarding claim 7, claim 1 of the ‘785 patent reciting “wherein the tag at least one of indicates that the image document requires a notary, displays questions about the image document, displays information about the image document or displays areas on the image document where a signature is required” corresponds exactly to claim 7.
Regarding claim 8, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches wherein the object includes at least one of a geometric shape, line, field, parenthesis or colon (Chen page 17, various objects capable of detection from Datacap including check boxes and bar codes (geometric shapes)). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 9, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches wherein the object field includes at least one of a checkbox, signature field, bubble, circle, shape or symbol (Chen page 17, various objects capable of detection from Datacap including check boxes and bar codes (geometric shapes)). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 10, claim 1 of the ‘785 patent reciting “wherein the metadata includes data about executing the image document at the location in the object field;” corresponds to the limitations of claim 10.
Regarding claim 11, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising determining, by the processor using an object detection algorithm, the object based on the object type (Chen pages 46–47, bar code recognized (determining) according to the bar code type, where a Code 39 bar code is recognized by a pattern of vertical lines, and a PDF417 bar code is determined by clusters of bars and spaces). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 12, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches wherein the metadata at least one of enables interaction with the document in order to effectuate an electronic transaction, includes data about the object field, or includes a process for executing the document in the object field (Chen page 35, the document hierarchy includes metadata about the fields (data about the object field) present in various portions of a document). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 14, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches wherein the determining the object field includes using an object detection algorithm (Chen page 126, document data objects determined by iterating through various extraction data techniques in order of preference (forming an object detection algorithm), starting with zonal searching, then trying regular expressions, keyword searching and lastly a click n key process), wherein the object detection algorithm uses a determination from the artificial intelligence engine (Chen page 126 learning application used to detect data objects in zones, the learning application (artificial intelligence engine) updated through a learning process through user input clicking on various regions). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 15, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising generating, by the processor using the artificial intelligence engine, at least one of textual analysis or contextual element analysis (Chen pages 124–126, learning application (using the artificial intelligence engine) learns zone information where information is stored, the zone defining the context/area around which information to be extracted is located). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 16, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising transmitting, by the processor, the object field to a participant for participant validation (Chen page 240, Datacap navigator includes a user interface where data values of fields, such as a First Name field, are displayed to users (transmitting from a memory to the user interface) for users to validate). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 17, claim 1 of the ‘785 patent reciting “enabling, by the one or more processors using the metadata, interaction with the object field, wherein the enabling converts the object field to an interactive object field to allow the interaction, and wherein the interaction includes the interactive object field being configured to accept electronic data input” corresponds to the limitations of claim 17.
Regarding claim 18, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising storing, by the processor and in a knowledge database, at least one of a participant validation of the object field, a participant action associated with the object field or a participant change to the object field (Chen page 253, Datacap Navigator including storing field properties in a database to allow for customizing field properties in the user interface (participant change to the object field)). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Regarding claim 19, claim 1 of the ‘785 patent does not explicitly teach but Chen teaches further comprising storing, by the processor and in a knowledge database, at least one of a participant validation of the object field, a participant action associated with the object field or a participant change to the object field in association with at least one of the document, document type or participant account (Chen page 13 figure 1-1 and page 253, Datacap Navigator including storing field properties in a database to allow for customizing field properties in the user interface (participant change to the object field) the fields belonging to a scanned document). The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,175,785 (herein “‘785 patent”) in view of Chen, and further in view of Lee et al., U.S. Patent No. 11,361,528 B2 (herein “Lee”).
Regarding claim 13, claim 1 of the ‘785 patent does not explicitly teach the limitations of claim 13. While Chen teaches a learning template used to recognize bar codes, and thus teaches recognizing various specific geometric symbols, Chen does not explicitly teach, but Lee teaches, further comprising recognizing, by the processor using the artificial intelligence engine, elements associated with a notary seal (Lee col. 2, l. 42 – col. 3, l. 12, classification systems implementing machine learning used to recognize stamp types including notary stamps (elements associated with a notary seal)).
The motivation to combine claim 1 of ‘785 with Chen is the same as set forth above regarding claim 1.
Further, taking the teachings of claim 1 of the ‘785 patent as modified by Chen and Lee together as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the learning-template-based object recognition of Chen to include recognizing notary seals as in Lee, at least because doing so would more accurately classify and analyze stamps or markings present on printed documents (see Lee col. 4, ll. 8–11).
Claims 1–12 and 14–20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 7–9, 12 and 15–19 of U.S. Patent No. 12,169,976 (herein “‘976 patent”) in view of Chen. Although the claims at issue are not identical, they are not patentably distinct from each other because the claims of the ‘976 patent recite most of the limitations of the present application, with the correspondence between the claims set forth below.
Regarding claims 1 and 20 of the present application, claims 1 and 19 of the ‘976 patent correspond as follows, with deficiencies of claims 1 and 19 of the ‘976 patent noted below in curly brackets {}:
| Claims 1 and 20 of the present application, claim 1 exemplary | Claims 1 and 19 of the ‘976 patent, claim 1 exemplary |
| --- | --- |
| A method comprising: - claim 1 | A method comprising: - claim 1 |
| A system comprising: a processor; and a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising: - claim 20 | A system comprising: one or more processors; and one or more tangible, non-transitory memories configured to communicate with the one or more processors, the one or more tangible, non-transitory memories having instructions stored thereon that, in response to execution by the one or more processors, cause the one or more processors to perform operations comprising: - claim 19 |
| converting, by a processor, a document into an image; | converting, by one or more processors, a document into an image document |
| detecting, by the processor using {an artificial intelligence engine}, words on the document; searching, by the processor, the words for keywords; | detecting, by the one or more processors, words on the image document; searching, by the one or more processors, the words for keywords |
| searching, by the processor using {the artificial intelligence engine}, for an object on the document; | searching, by the one or more processors, for an object on the image document; |
| determining, by the processor, an object field based on the keywords and the object; | determining, by the one or more processors, an existence and location of an object field in the image document, based on the keywords |
| creating, by the processor, a tag with metadata about a type of the tag and the object field; | creating, by the one or more processors, a tag with metadata about a type of the tag and the object field |
| associating, by the processor using {the artificial intelligence engine}, the tag with the object field; | associating, by the one or more processors, the tag with the object field |
| and enabling, by the processor using the metadata, interaction with the object field. | enabling, by the one or more processors using the metadata, interaction with the object field |
Claim 1 of the ‘976 patent does not explicitly recite, but Chen teaches, an artificial intelligence engine (Chen page 119, third-from-last paragraph, teaching that the Datacap system “learns” unknown formatted document layouts for users by way of verification as user feedback and interaction, thus using an artificial intelligence).
Therefore, taking claim 1 of the ‘976 patent and Chen together as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified claim 1 to include the Datacap machine learning, realizing a type of artificial intelligence, as disclosed in Chen, at least because doing so would allow for processing documents that are unstructured and for which the variation of documents is not controllable. See Chen page 118.
Regarding claim 2, claim 1 of the ‘976 patent does not explicitly teach but Chen teaches further comprising training the artificial intelligence engine using a plurality of documents to learn to identify object fields in the plurality of documents (Chen pages 45–46, Datacap system using IBM Content Classification that learns from the processing of a range of sample documents to perform full-text recognition by processing OCR documents without operator intervention, where recognizing includes bar code recognition to locate and recognize bar codes in an image (identify object fields)). The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Regarding claim 3, claim 1 of the ‘976 patent does not explicitly teach but Chen teaches further comprising training the artificial intelligence engine with at least one of participant feedback or participant interaction with the objects on the document (Chen pages 58–59, Datacap providing an interface for users to click on and manually correct low-confidence recognition results of certain fields in a document). The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Regarding claim 4, claim 1 of the ‘976 patent does not explicitly teach but Chen teaches further comprising training the artificial intelligence engine using at least one of similarities or differences of a plurality of documents (Chen page 90, learning template used for unstructured documents that are known to have some fields (similarities) but unknown where fields are located (differences), the Datacap learns new document formats when they are processed using the learning template). The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Regarding claim 5, claim 1 of the ‘976 patent does not explicitly teach but Chen teaches further comprising determining, by the processor using the artificial intelligence engine, an object type of the object (Chen pages 148–149, a learning template is trained over time to automatically find data through locate rules further taught on pages 159–160 as extracting data field zones of different types). The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Regarding claim 6, claim 5 of the ‘976 patent reciting “wherein the keywords further include, the participant type, document types or instructional terms” corresponds to the claimed “wherein the keywords include at least one of names of participants that need to sign the document, the participant type, document types, instructional terms or notary language.”
Regarding claim 7, claim 1 of the ‘976 patent reciting “wherein the tag at least one of indicates that the image document requires a notary, displays questions about the image document, displays information about the image document or displays areas on the image document where a signature is required” corresponds exactly to claim 7.
Regarding claim 8, claim 8 of the ‘976 patent reciting “wherein the object includes at least one of a geometric shape, line, field, parenthesis or colon,” corresponds exactly to claim 8.
Regarding claim 9, claim 9 of the ‘976 patent reciting “wherein the object field includes at least one of a signature field, checkbox, bubble, circle, shape or symbol,” corresponds exactly to claim 9.
Regarding claim 10, claim 1 of the ‘976 patent reciting “wherein the metadata includes data about executing the image document in the object field,” corresponds to the limitations of claim 10.
Regarding claim 11, claim 1 of the ‘976 patent reciting “determining, by the one or more processors, a type of the object on the image document,” and claim 7 of the ‘976 patent reciting “wherein an object detection algorithm is used in the determining the object fields,” correspond to the limitations of claim 11.
Regarding claim 12, claim 12 of the ‘976 patent reciting “wherein the metadata at least one of enables interaction with the image document in order to effectuate an electronic transaction, includes data about the object field, or includes a process for executing the image document in the object field,” corresponds to the limitations of claim 12.
Regarding claim 14, claim 1 of the ‘976 patent does not explicitly teach but Chen teaches wherein the determining the object field includes using an object detection algorithm (Chen page 126, document data objects determined by iterating through various extraction data techniques in order of preference (forming an object detection algorithm), starting with zonal searching, then trying regular expressions, keyword searching and lastly a click n key process), wherein the object detection algorithm uses a determination from the artificial intelligence engine (Chen page 126 learning application used to detect data objects in zones, the learning application (artificial intelligence engine) updated through a learning process through user input clicking on various regions). The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Regarding claim 15, claim 1 of the ‘976 patent does not explicitly teach but Chen teaches further comprising generating, by the processor using the artificial intelligence engine, at least one of textual analysis or contextual element analysis (Chen pages 124–126, learning application (using the artificial intelligence engine) learns zone information where information is stored, the zone defining the context/area around which information to be extracted is located). The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Regarding claim 16, claim 15 of the ‘976 patent reciting “further comprising transmitting, by the one or more processors, the object field to a participant for participant validation,” corresponds to the limitations of claim 16.
Regarding claim 17, claim 16 of the ‘976 patent reciting “further comprising enabling, by the one or more processors, the object field to accept electronic entries,” corresponds to the limitations of claim 17.
Regarding claim 18, claim 17 of the ‘976 patent reciting “further comprising storing, by the one or more processors and in a knowledge database, at least one of a participant validation of the object field, a participant action associated with the object field or a participant change to the object field,” corresponds to the limitations of claim 18.
Regarding claim 19, claim 18 of the ‘976 patent reciting “further comprising storing, by the one or more processors and in a knowledge database, at least one of a participant validation of the object field, a participant action associated with the object field or a participant change to the object field in association with at least one of the image document, document type or participant account,” corresponds to the limitations of claim 19.
Claim 13 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 12,169,976 (herein “‘976 patent”) in view of Chen, and further in view of Lee et al., U.S. Patent No. 11,361,528 B2 (herein “Lee”).
Regarding claim 13, claim 1 of the ‘976 patent does not explicitly teach the limitations of claim 13. While Chen teaches a learning template used to recognize bar codes, and thus teaches recognizing various specific geometric symbols, Chen does not explicitly teach, but Lee teaches, further comprising recognizing, by the processor using the artificial intelligence engine, elements associated with a notary seal (Lee col. 2, l. 42 – col. 3, l. 12, classification systems implementing machine learning used to recognize stamp types including notary stamps (elements associated with a notary seal)).
The motivation to combine claim 1 of ‘976 with Chen is the same as set forth above regarding claim 1.
Further, taking the teachings of claim 1 of the ‘976 patent as modified by Chen and Lee together as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the learning-template-based object recognition of Chen to include recognizing notary seals as in Lee, at least because doing so would more accurately classify and analyze stamps or markings present on printed documents (see Lee col. 4, ll. 8–11).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1–12 and 14–20 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al., Implementing Document Imaging and Capture Solutions with IBM Datacap, IBM Redbooks, October 2, 2015 (herein “Chen”).
Regarding claims 1 and 20, with substantive differences between claims 1 and 20 noted in curly brackets {}, and with claim 1 as exemplary, Chen teaches a {method – claim 1, system – claim 20} comprising (Chen page 6, section 1.3, Datacap processing including production-level digitization, data extraction, verification, indexing and exporting of documents to back-end systems) {a processor; and a tangible, non-transitory memory configured to communicate with the processor, the tangible, non-transitory memory having instructions stored thereon that, in response to execution by the processor, cause the processor to perform operations comprising: - claim 20 only (Chen pages 69 and 73, Datacap processing performed on a Datacap Server (processor) including a database server connection (non-transitory memory))}:
converting, by a processor, a document into an image (Chen pages 10–11 and 13, section 1.4.1, Precommittal process section, fig. 1-1, Datacap workflow including scanning and image processing of a document, and separation and classification of images, where a claim document is faxed or scanned in a field office or captured on an iPhone or iPad);
detecting, by the processor using an artificial intelligence engine, words on the document (Chen page 15, documents (their images/scans) are ingested into FileNet Content Manager where rulerunner actions are started to extract additional data, where page 47 teaches the Datacap process including optical character recognition to recognize characters and then assemble characters into words, and where page 90, section 4.2.2, and page 119, third-from-last paragraph, teach that the Datacap system “learns” unknown formatted document layouts for users by way of verification as user feedback and interaction, thus using an artificial intelligence, given the BRI of “artificial intelligence” in view of the supporting passages in the specification, which do not detail “artificial intelligence” beyond machine learning generally);
searching, by the processor, the words for keywords (Chen page 50, lists of keywords can be formed and datacap provides a text search for keywords function);
searching, by the processor using the artificial intelligence engine, for an object on the document (Chen page 125, Datacap recognizing areas in a document by searching for the areas proximate to a keyword);
determining, by the processor, an object field based on the keywords and the object (Chen page 125, finding values for a particular text label based on keywords corresponding to the text label, for example finding the keyword Insurance and looking to the right or below the keyword to find actual desired data to extract (object field));
creating, by the processor, a tag with metadata about a type of the tag and the object field (Chen, page 11, searchable metadata is extracted during the precommittal processing phase, with page 77 teaching the metadata being indexed in an XML file, and pages 210–211, fig. 9-9, showing the result of document metadata placed in XML tags including information about the object field and type of the tag (for example Author tag));
associating, by the processor using the artificial intelligence engine, the tag with the object field (Chen page 212, parent-child relationships are used to designate associations in the XML structure between the tags and object fields in the document); and
enabling, by the processor using the metadata, interaction with the object field (Chen page 30, Datacap triggers verification and validation by a human operator of the partially recognized document data when confidence in the data accuracy is below a set level, where page 37 teaches a human operator validating field values (object field), where page 240 teaches the fields for human review being edited in the Datacap Navigator listing field details from a Batch structure files, shown on page 324 as including the metadata for the document).
While Chen teaches that one system performs all of the converting of a document to an image and the processing of a document image as given above in the rejection rationale, Chen does not explicitly teach that the method is performed by just one/the same processor, as claimed. For example, Figure 1-1 of Chen teaches use of a scanner or mobile phone to perform document conversion, with other processing performed outside of the mobile phone and capturing device. Further, Chen on page 33 suggests that some of the functionality can be integrated on one device using a Datacap mobile capture feature, and thus, full integration of the scanning features as well as the document image processing features onto one device with one processor would merely make integral into one processor the operations taught by Chen. Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the various functions taught in Chen cited above to be performed within one processor at least because doing so would simply be making the functionality integral into one processor. See MPEP 2144.04(V)(B).
Regarding claim 2, Chen teaches further comprising training the artificial intelligence engine using a plurality of documents to learn to identify object fields in the plurality of documents (Chen pages 45–46, Datacap system using IBM Content Classification that learns from the processing of a range of sample documents to perform full-text recognition by processing OCR documents without operator intervention, where recognizing includes bar code recognition to locate and recognize bar codes in an image (identify object fields)).
Regarding claim 3, Chen teaches further comprising training the artificial intelligence engine with at least one of participant feedback or participant interaction with the objects on the document (Chen pages 58–59, Datacap providing an interface for users to click on and manually correct low-confidence recognition results of certain fields in a document).
Regarding claim 4, Chen teaches further comprising training the artificial intelligence engine using at least one of similarities or differences of a plurality of documents (Chen page 90, learning template used for unstructured documents that are known to have some fields (similarities) but unknown where fields are located (differences), the Datacap learns new document formats when they are processed using the learning template).
Regarding claim 5, Chen teaches further comprising determining, by the processor using the artificial intelligence engine, an object type of the object (Chen pages 148–149, a learning template is trained over time to automatically find data through locate rules further taught on pages 159–160 as extracting data field zones of different types).
Regarding claim 6, Chen teaches wherein the keywords include at least one of names of participants that need to sign the document, the participant type, document types, instructional terms or notary language (Chen page 45, the type of document can be unequivocally determined by a keyword search).
Regarding claim 7, Chen teaches wherein the tag at least one of indicates that the document requires a notary, displays questions about the document, displays information about the document or displays areas on the document where a signature is required (Chen page 96, XML documents including specification of the document type, where page 157 shows the XML on display including the information about the document type and other document information).
Regarding claim 8, Chen teaches wherein the object includes at least one of a geometric shape, line, field, parenthesis or colon (Chen page 17, various objects capable of detection from Datacap including check boxes and bar codes (geometric shapes)).
Regarding claim 9, Chen teaches wherein the object field includes at least one of a checkbox, signature field, bubble, circle, shape or symbol (Chen page 17, various objects capable of detection from Datacap including check boxes and bar codes (geometric shapes)).
Regarding claim 10, Chen teaches wherein the metadata includes data about executing the document in the object field (Chen pages 30, 35 and 91, optical mark recognition identifying a signature on a form, which is included in the document hierarchy, and where the document hierarchy includes metadata about the fields present in various portions of a document).
Regarding claim 11, Chen teaches further comprising determining, by the processor using an object detection algorithm, the object based on the object type (Chen pages 46–47, bar code recognized (determining) according to the bar code type, where a Code 39 bar code is recognized by a pattern of vertical lines, and a PDF417 bar code is determined by clusters of bars and spaces).
Regarding claim 12, Chen teaches wherein the metadata at least one of enables interaction with the document in order to effectuate an electronic transaction, includes data about the object field, or includes a process for executing the document in the object field (Chen page 35, the document hierarchy includes metadata about the fields (data about the object field) present in various portions of a document).
Regarding claim 14, Chen teaches wherein the determining the object field includes using an object detection algorithm (Chen page 126, document data objects determined by iterating through various extraction data techniques in order of preference (forming an object detection algorithm), starting with zonal searching, then trying regular expressions, keyword searching and lastly a click n key process), wherein the object detection algorithm uses a determination from the artificial intelligence engine (Chen page 126 learning application used to detect data objects in zones, the learning application (artificial intelligence engine) updated through a learning process through user input clicking on various regions).
Regarding claim 15, Chen teaches further comprising generating, by the processor using the artificial intelligence engine, at least one of textual analysis or contextual element analysis (Chen pages 124–126, learning application (using the artificial intelligence engine) learns zone information where information is stored, the zone defining the context/area around which information to be extracted is located).
Regarding claim 16, Chen teaches further comprising transmitting, by the processor, the object field to a participant for participant validation (Chen page 240, Datacap navigator includes a user interface where data values of fields, such as a First Name field, are displayed to users (transmitting from a memory to the user interface) for users to validate).
Regarding claim 17, Chen teaches further comprising enabling, by the processor, the object field to accept electronic entries (Chen page 240, Datacap navigator providing a user interface on a computer screen (electronic) to allow users to enter values for fields).
Regarding claim 18, Chen teaches further comprising storing, by the processor and in a knowledge database, at least one of a participant validation of the object field, a participant action associated with the object field or a participant change to the object field (Chen page 253, Datacap Navigator including storing field properties in a database to allow for customizing field properties in the user interface (participant change to the object field)).
Regarding claim 19, Chen teaches further comprising storing, by the processor and in a knowledge database, at least one of a participant validation of the object field, a participant action associated with the object field or a participant change to the object field in association with at least one of the document, document type or participant account (Chen page 13 figure 1-1 and page 253, Datacap Navigator including storing field properties in a database to allow for customizing field properties in the user interface (participant change to the object field) the fields belonging to a scanned document).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Chen further in view of Lee et al., U.S. Patent No. 11,361,528 B2 (herein “Lee”).
Regarding claim 13, while Chen teaches a learning template used to recognize bar codes, and thus teaches recognizing various specific geometric symbols, Chen does not explicitly teach, but Lee teaches, further comprising recognizing, by the processor using the artificial intelligence engine, elements associated with a notary seal (Lee col. 2, l. 42 – col. 3, l. 12, classification systems implementing machine learning used to recognize stamp types including notary stamps (elements associated with a notary seal)).
Therefore, taking the teachings of Chen and Lee together as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified the learning template based object recognition of Chen to include recognizing notary seals as in Lee at least because doing so would more accurately classify and analyze stamps or markings present on printed documents (see Lee col. 4, ll. 8–11).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
King et al., US Patent No. 8,447,111 B2, directed towards processing text captured from rendered documents.
Paruchuri et al., US Patent Application Publication No. US 2022/0156300 A1, directed towards converting documents to images for entity extraction and processing the documents based on a matching template and image alignment of the document image.
Maze et al., US Patent Application Publication No. US 2010/0325102 A1, directed towards document acquisition software to acquire and parse a document to create a set of parsed data that is stored in a database.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE M KOETH whose telephone number is (571)272-5908. The examiner can normally be reached Monday-Thursday, 09:00-17:00, Friday 09:00-13:00, EDT/EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vincent Rudolph can be reached at 571-272-8243. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
MICHELLE M. KOETH
Primary Examiner
Art Unit 2671
/MICHELLE M KOETH/Primary Examiner, Art Unit 2671