DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 13, 2026 has been entered.
Response to Arguments
Applicant’s arguments filed on February 13, 2026 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Response to Amendment
The amendment to the claims received on February 13, 2026 has been entered.
The amendment of claims 1, 5 and 12 is acknowledged.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Busila’512 (US 2021/0357512), and further in view of Eisen’162 (US 2020/0380162).
With respect to claim 1, Busila’512 teaches a computer-implemented method (abstract) comprising:
detecting, by an optical character recognition (OCR) component, at least a first field of contiguous text in a first image and a second field of contiguous text in the first image (paragraphs 31 and 33);
determining a first alpha-numeric text string in the first field (paragraphs 30, 31 and 43);
determining a second alpha-numeric text string in the second field, wherein at least one of the first alpha-numeric text string and the second alpha-numeric text string comprises personally-identifiable information (PII) (paragraphs 30, 31, 39 and 43);
receiving, from a first annotator, first label data for the first sub-image [clusters can be classified as an address cluster (e.g. an address block in an invoice or statement), a document title/type cluster (e.g. a block indicating an invoice or a statement), a data cluster, an item description cluster (e.g. a description of an item itemized in an invoice, receipt, or statement), a signature cluster, a name cluster (e.g. a salutation block in a letter or an “attention” block in a statement/invoice), a logo cluster, a notation cluster (e.g. a handwritten note block in an annotated document), and an email address cluster (i.e. a cluster containing an email address) (paragraph 43). Each cluster is considered an annotator.];
receiving, from a second annotator, second label data for the second sub-image [clusters can be classified as an address cluster (e.g. an address block in an invoice or statement), a document title/type cluster (e.g. a block indicating an invoice or a statement), a data cluster, an item description cluster (e.g. a description of an item itemized in an invoice, receipt, or statement), a signature cluster, a name cluster (e.g. a salutation block in a letter or an “attention” block in a statement/invoice), a logo cluster, a notation cluster (e.g. a handwritten note block in an annotated document), and an email address cluster (i.e. a cluster containing an email address) (paragraph 43). Each cluster is considered an annotator.];
generating second image data, wherein the second image data represents a background of the first image with the first alpha-numeric text string removed from the first field and the second alpha-numeric text string removed from the second field [the replacement data generator generates suitable replacement data for the sensitive data found in the document (paragraph 48). Analysis of the sensitive data would result in not just the type of data of the sensitive data but would also result in an identification of the visual and/or textual characteristics of the sensitive data (e.g. font, font size, character size, character spacing, color, character pitch, background color, foreground color, etc., etc.) (paragraph 48)]; and
generating modified second image data by populating the first field in the second image data with a first randomized alpha-numeric text string and the second field in the second image data with a second randomized alpha-numeric text string (paragraphs 35 and 48).
Busila’512 does not teach generating, by fragmenting the first image, a first sub-image of the first alpha-numeric text string; generating, by fragmenting the first image, a second sub-image of the second alpha-numeric text string.
Eisen’162 teaches generating, by fragmenting the first image, a first sub-image of the first alpha-numeric text string (claim 11, Fig.4, Fig.8 and Fig.9);
generating, by fragmenting the first image, a second sub-image of the second alpha-numeric text string (claim 11, Fig.4, Fig.8 and Fig.9).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busila’512 according to the teaching of Eisen’162 to redact the sensitive information by encrypting, blurring, masking, or inserting a line over all or just a portion of the image because this will allow the sensitive information in a document to be protected.
With respect to claim 2, which further limits claim 1, Busila’512 teaches generating second image data, wherein the second image data represents a background of the first image with the first alpha-numeric text string removed from the first field and the second alpha-numeric text string removed from the second field [the sensitive information is being replaced with the generated replacement data including a block (generate second image data) that blocks the sensitive information (paragraphs 35, 39, 43 and 60).]; and
receiving a first bounding box annotation from the first annotator, the first bounding box annotation representing a text field present in the first image undetected by the OCR component [the sensitive information is replaced with the generated replacement data including a block (generate second image data) that blocks the sensitive information (paragraphs 35, 39, 43 and 60). When sensitive information is blocked with a block, the sensitive information is considered undetectable by the OCR component].
With respect to claim 4, which further limits claim 1, Busila’512 teaches receiving a first bounding box annotation from the first annotator for the second image data, the first bounding box annotation defining an attribute type for the first field (paragraph 35).
Claims 5-10, 12-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Busila’512 (US 2021/0357512), and further in view of Eisen’162 (US 2020/0380162) and Liu’537 (US 2021/0326537).
With respect to claim 13, Busila’512 teaches a system (paragraph 65) comprising:
at least one processor [a computer system with a processor to perform its desired function is inherently disclosed (paragraph 65)];
and non-transitory computer-readable memory storing instructions that, when executed by the at least one processor (paragraph 65), are effective to cause the at least one processor to:
detect a first field of text in a first image and a second field of text in the first image (paragraphs 31 and 33);
determine a first alpha-numeric text string in the first field (paragraphs 30, 31 and 43);
determine a second alpha-numeric text string in the second field (paragraphs 30, 31, 39 and 43);
generate second image data representing a background of the first image with text removed from the field [the replacement data generator generates suitable replacement data for the sensitive data found in the document (paragraph 48). Analysis of the sensitive data would result in not just the type of data of the sensitive data but would also result in an identification of the visual and/or textual characteristics of the sensitive data (e.g. font, font size, character size, character spacing, color, character pitch, background color, foreground color, etc., etc.) (paragraph 48)];
generate third image data based at least in part on inserting a third alpha-numeric text string in the first field in the second image data and inserting a fourth alpha-numeric text string in the second field in the second image data (paragraph 35);
Busila’512 does not teach generating, by fragmenting the first image, a first sub-image of the first alpha-numeric text string; generating, by fragmenting the first image, a second sub-image of the second alpha-numeric text string; send the first sub-image to a first computing device for annotation; send the second sub-image to a second computing device for annotation; determine first bounding box data representing a location of the first field in the second image data; determine second bounding box data representing a location of the second field in the second image data.
Eisen’162 teaches generating, by fragmenting the first image, a first sub-image of the first alpha-numeric text string (claim 11, Fig.4, Fig.8 and Fig.9);
generating, by fragmenting the first image, a second sub-image of the second alpha-numeric text string (claim 11, Fig.4, Fig.8 and Fig.9).
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Busila’512 according to the teaching of Eisen’162 to redact the sensitive information by encrypting, blurring, masking, or inserting a line over all or just a portion of the image because this will allow the sensitive information in a document to be protected.
The combination of Busila’512 and Eisen’162 does not teach: send the first sub-image to a first computing device for annotation; send the second sub-image to a second computing device for annotation; determine first bounding box data representing a location of the first field in the second image data; determine second bounding box data representing a location of the second field in the second image data.
Liu’537 teaches send the first sub-image to a first computing device for annotation [the individual section is sent to one of a plurality of untrusted translation engines for translation (Fig.7, step 707 and paragraph 106). The translation is considered an annotation];
send the second sub-image to a second computing device for annotation [the individual section is sent to one of a plurality of untrusted translation engines for translation (Fig.7, step 707 and paragraph 106). The translation is considered an annotation];
determine first bounding box data representing a location of the first field in the second image data [the location of the content in the text is determined (paragraph 112); it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use this location determination technique to determine the location of the bounding box data in the image data because this will allow the contents in the image data to be located more effectively]; and
determine second bounding box data representing a location of the second field in the second image data [the location of the content in the text is determined (paragraph 112); it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to use this location determination technique to determine the location of the bounding box data in the image data because this will allow the contents in the image data to be located more effectively].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busila’512 and Eisen’162 according to the teaching of Liu’537 to include a translation engine to translate the sensitive information because this will allow the desired translation of a document to be provided.
With respect to claim 15, which further limits claim 13, the combination of Busila’512, Eisen’162 and Liu’537 does not teach the non-transitory computer-readable memory storing further instructions that, when executed by the at least one processor, are further effective to cause the at least one processor to receive, from the first computing device, a first annotation representing an undetected text field in the second image data.
Since Busila’512 has suggested that the sensitive information is replaced with generated replacement data including a block (generate second image data) that blocks the sensitive information (paragraphs 35, 39, 43 and 60), and Liu’537 has suggested that the individual section is sent to one of a plurality of untrusted translation engines for translation (Fig.7, step 707 and paragraph 106), the translation being considered an annotation, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busila’512, Eisen’162 and Liu’537 to include multiple translation engines able to translate the sub-images associated with a document image, such that a sub-image containing the sensitive information in the document image is transmitted to a desired translation engine, and the desired translation engine replaces the sensitive information with translated data and returns it (the non-transitory computer-readable memory storing further instructions that, when executed by the at least one processor, are further effective to cause the at least one processor to receive, from the first computing device, a first annotation representing an undetected text field in the second image data), because this will enhance the security of a document image.
With respect to claim 16, which further limits claim 13, Busila’512 teaches wherein: the third alpha-numeric text string comprises pseudo-random characters (paragraph 36); and
the fourth alpha-numeric text string comprises pseudo-random characters (paragraph 36).
With respect to claim 17, which further limits claim 13, Busila’512 teaches the non-transitory computer-readable memory storing further instructions that, when executed by the at least one processor, are further effective to cause the at least one processor to:
save the third image data with the third alpha-numeric text string in the first field and the fourth alpha-numeric text string in the second field as third image data in the non-transitory computer-readable memory, wherein the third image data represents formatting of the first image with different text strings replacing text in detected fields (paragraph 36).
With respect to claim 18, which further limits claim 17, the combination of Busila’512, Eisen’162 and Liu’537 does not teach the non-transitory computer-readable memory storing further instructions that, when executed by the at least one processor, are further effective to cause the at least one processor to: send the third image data to the first computing device; receive, from the first computing device, a first bounding box annotation for the first field of the third image data, the first bounding box annotation labeled with a first attribute type of the first field; and receive, from the first computing device, a second bounding box annotation for the second field of the third image data, the second bounding box annotation labeled with a second attribute type of the second field.
Since Busila’512 has suggested that the sensitive information is replaced with generated replacement data including a block (generate second image data) that blocks the sensitive information (paragraphs 35, 39, 43 and 60), and Liu’537 has suggested that the individual section is sent to one of a plurality of untrusted translation engines for translation (Fig.7, step 707 and paragraph 106), the translation being considered an annotation, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busila’512, Eisen’162 and Liu’537 to include multiple translation engines able to translate the sub-images associated with a document image, such that a sub-image containing the sensitive information in the document image is transmitted to a desired translation engine, and the desired translation engine replaces the sensitive information with translated data and returns it (the non-transitory computer-readable memory storing further instructions that, when executed by the at least one processor, are further effective to cause the at least one processor to: send the third image data to the first computing device; receive, from the first computing device, a first bounding box annotation for the first field of the third image data, the first bounding box annotation labeled with a first attribute type of the first field; and receive, from the first computing device, a second bounding box annotation for the second field of the third image data, the second bounding box annotation labeled with a second attribute type of the second field), because this will enhance the security of a document image.
With respect to claim 20, which further limits claim 13, Busila’512 teaches the non-transitory computer-readable memory storing further instructions that, when executed by the at least one processor, are further effective to cause the at least one processor to: determine geometric data representing a location of the first field of text in the first image (paragraph 41); and
generate second image data representing a background of the first image with text removed, wherein the location of the first field of text in the second image data is determined using the geometric data (paragraphs 41 and 60).
With respect to claims 5, 7-10 and 12, these are method claims directed to how the system of claims 13, 15-18 and 20 annotates the sensitive information in a document image. Claims 5, 7-10 and 12 are obvious in view of Busila’512, Eisen’162 and Liu’537 because the claimed combination operates in the same manner as described in the rejected claims 13, 15-18 and 20. In addition, since the references disclose a system to annotate the sensitive information in a document image, the process (method) of annotating the sensitive information in a document image is inherently disclosed as being performed by a processor in the system when the system performs the operation of annotating the sensitive information in a document image.
Claims 11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Busila’512 (US 2021/0357512), Eisen’162 (US 2020/0380162), Liu’537 (US 2021/0326537) and further in view of Berker’737 (US 2021/0397737).
With respect to claim 19, which further limits claim 13, the combination of Busila’512, Eisen’162 and Liu’537 does not teach wherein the first alpha-numeric text string in the first sub-image comprises less than a total amount of text present in the first field of text in the first image.
Berker’737 teaches wherein the first alpha-numeric text string in the first sub-image comprises less than a total amount of text present in the first field of text in the first image [as shown in Fig.4, the alpha-numeric text string in the document image is boxed according to the desired conditions. Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to box more or less of the first alpha-numeric text string in the document image depending on a user’s preference and needs, since the first alpha-numeric text string in the document image is boxed according to the given conditions].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busila’512, Eisen’162 and Liu’537 according to the teaching of Berker’737 to box a desired amount of the alpha-numeric text string of sensitive information in the document image for replacement because this will allow the sensitive information in the document image to be blocked more effectively.
With respect to claim 11, it is a method claim directed to how the system of claim 19 annotates the sensitive information in a document image. Claim 11 is obvious in view of Busila’512, Eisen’162, Liu’537 and Berker’737 because the claimed combination operates in the same manner as described in the rejected claim 19. In addition, since the references disclose a system to annotate the sensitive information in a document image, the process (method) of annotating the sensitive information in a document image is inherently disclosed as being performed by a processor in the system when the system performs the operation of annotating the sensitive information in a document image.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Busila’512 (US 2021/0357512), Eisen’162 (US 2020/0380162) and further in view of Berker’737 (US 2021/0397737).
With respect to claim 21, which further limits claim 1, the combination of Busila’512 and Eisen’162 does not teach further comprising training a first computer vision model based at least in part by: generating a first prediction by the first computer vision model for the first sub-image; comparing the first prediction to the first label data; generating a second prediction by the first computer vision model for the second sub-image; and comparing the second prediction to the second label data.
Berker’737 teaches generating a first prediction by the first computer vision model for the first sub-image [the sensitive data (the first sub-image) in the document image is tagged, blocked off, and then replaced, and the user provides feedback regarding the processed result associated with the sensitive data (Fig.3, paragraphs 34-37)];
comparing the first prediction to the first label data [the sensitive data in the document image is tagged, blocked off, and then replaced, and the user provides feedback regarding the processed result for the sensitive data (Fig.3, paragraphs 34-37). When the user provides feedback regarding the processed result for the sensitive data, the user is considered to perform the comparing for the processed result associated with the sensitive data];
generating a second prediction by the first computer vision model for the second sub-image [the sensitive data (the second sub-image) in the document image is tagged, blocked off, and then replaced, and the user provides feedback regarding the processed result associated with the sensitive data (Fig.3, paragraphs 34-37)]; and
comparing the second prediction to the second label data [the sensitive data in the document image is tagged, blocked off, and then replaced, and the user provides feedback regarding the processed result for the sensitive data (Fig.3, paragraphs 34-37). When the user provides feedback regarding the processed result for the sensitive data, the user is considered to perform the comparing for the processed result associated with the sensitive data].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Busila’512 and Eisen’162 according to the teaching of Berker’737 to train a computer vision model for blocking the sensitive information on a document because this will allow the sensitive information in the document image to be blocked more effectively.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUO LONG CHEN whose telephone number is (571) 270-3759. The examiner can normally be reached on M-F 9am - 5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tieu, Benny, can be reached on (571) 272-7490. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUO LONG CHEN/Primary Examiner, Art Unit 2682