DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments filed on December 29, 2025 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
The 35 U.S.C. 101 rejections of claims 1-26 have been withdrawn in view of the amendment and the arguments in the Remarks received on December 29, 2025.
Response to Amendment
The amendment to the claims received on December 29, 2025 has been entered.
The amendment of claims 1-8 and 11 is acknowledged.
The cancellation of claims 9, 10 and 12-26 is acknowledged.
CLAIM INTERPRETATION
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a reception unit” in claim 1, “a reception unit” in claim 2, and “a reception unit” in claim 6.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
The “reception unit” in each of claims 1, 2, and 6 is read as the processing unit (Fig. 1, item 13), which is a CPU (paragraph 56).
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Examiner Note
The limitation in claim 1, “obtain a verification image by imaging a term having the appearance frequency lower than or equal to a threshold value, in the plurality of second terms, the term being likely to be an error included in the designated document; obtain similarity degrees between the verification image and each of the plurality of comparison images, and wherein display device is configured to present display the term likely to be the error in the designated document with the first term represented by at least the comparison image with the highest similarity degree” is considered as additional elements that are sufficient to amount to significantly more than the judicial exception according to the improvement disclosed in paragraphs 69 and 101 of the filed specification.
The limitation in claim 2, “obtain a verification image by imaging a term having the appearance frequency lower than or equal to a first threshold value in the plurality of second terms, the term being likely to be an error included in the designated document; and infer a substituted term represented by the verification image for the term likely to be the error included in the designated document using an image determination model, and wherein the display device is configured to display the term likely to be the error included in the designated document with the substituted term” is considered as additional elements that are sufficient to amount to significantly more than the judicial exception according to the improvement disclosed in paragraphs 69 and 101 of the filed specification.
The limitation in claim 6, “obtain a verification image by imaging a term having the appearance frequency lower than or equal to a first threshold value in the plurality of second terms, the term being likely to be an error included in the designated document; and infer a substituted term represented by the verification image for the term likely to be the error included in the designated document using an image determination model, and wherein the display device is configured to display the term likely to be the error included in the designated document with the substituted term.” is considered as additional elements that are sufficient to amount to significantly more than the judicial exception according to the improvement disclosed in paragraphs 69 and 101 of the filed specification.
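For illustration only, and not as part of the record, the flow recited in these limitations (flagging a term with low appearance frequency as a likely error, then matching its verification image against the comparison images) can be sketched as follows. Term images are simulated here as plain strings, and the function name, sample documents, threshold value, and similarity measure are hypothetical assumptions rather than disclosures of any cited reference.

```python
# Hypothetical sketch of the claimed proofreading flow; term "images"
# are simulated as strings and SequenceMatcher stands in for an
# unspecified image-similarity measure.
from collections import Counter
from difflib import SequenceMatcher

def find_likely_errors(comparison_doc, designated_doc, threshold=0):
    first_terms = comparison_doc.split()    # plurality of first terms
    second_terms = designated_doc.split()   # plurality of second terms
    freq = Counter(first_terms)             # appearance frequency per term
    suggestions = {}
    for term in second_terms:
        if freq[term] <= threshold:         # likely an error in the document
            # compare the verification "image" with each comparison "image"
            # and keep the first term with the highest similarity degree
            best = max(first_terms,
                       key=lambda t: SequenceMatcher(None, term, t).ratio())
            suggestions[term] = best
    return suggestions

print(find_likely_errors("the quick brown fox the quick",
                         "the quack brown fox"))  # → {'quack': 'quick'}
```

In this hypothetical run, "quack" never appears in the comparison document, so it is flagged and paired with "quick", the comparison term with the highest similarity degree.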
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over Wang’859 (US 2012/0072859), and further in view of JPH09190506A and Fujiwara’332 (US 2015/0213332).
With respect to claim 1, Wang’859 teaches a system for proofreading a document (paragraph 52) comprising:
a reception unit [the original electronic document is received in step 220 shown in Fig. 6(a). Therefore, a reception unit is considered to be disclosed to receive the original document];
a processing unit [a system is inherently disclosed with at least one processor to perform its desired function (paragraph 52)]; and
a display device [as shown in Fig. 11, a window screen is provided to enable a user to perform any desired operation. Therefore, a display device is inherently disclosed to provide the window screen shown in Fig. 11 to the user],
wherein the reception unit is configured to receive a comparison document [the reference electronic document is received in step 220 shown in Fig. 6(a)] and a designated document [the original electronic document is received in step 220 shown in Fig. 6(a)],
wherein the processing unit is configured to:
divide a sentence included in the comparison document into a plurality of first terms [regarding the document segmentation (paragraph 51)];
obtain a plurality of comparison images by imaging the plurality of first terms [regarding the document segmentation (paragraph 51)];
divide a sentence included in the designated document into a plurality of second terms [regarding the document segmentation (paragraph 51)];
Wang’859 does not teach: obtain an appearance frequency of each of the plurality of second terms in the comparison document; obtain a verification image by imaging a term having the appearance frequency lower than or equal to a threshold value, in the plurality of second terms, the term being likely to be an error included in the designated document; obtain similarity degrees between the verification image and each of the plurality of comparison images; and wherein the display device is configured to display the term likely to be the error in the designated document with the first term represented by at least the comparison image with the highest similarity degree.
JPH09190506A teaches obtain an appearance frequency of each of the plurality of second terms in the comparison document [measuring means for measuring the appearance rate of the characters recognized by the recognizing means, where the character having the low appearance rate (second term) is searched and then corrected with desired character data (pages 1 and 2)];
obtain a verification image by imaging a term having the appearance frequency lower than or equal to a threshold value, in the plurality of second terms, the term being likely to be an error included in the designated document [measuring means for measuring the appearance rate of the characters recognized by the recognizing means according to the accumulated contents (pages 1-3), and a replacement unit that replaces the character having the low appearance rate (the second term) searched by the search unit with the corrected character data (pages 1 and 2). Therefore, the character having the low appearance rate (the second term) is considered to be verified as the character likely to be an error included in a document];
obtain similarity degrees between the verification image and each of the plurality of comparison images [measuring means for measuring the appearance rate of the characters recognized by the recognizing means according to the accumulated contents (pages 1-3), and a replacement unit that replaces the character having the low appearance rate (the second term) searched by the search unit with the corrected character data (pages 1 and 2). Therefore, the similarity degrees between the verification image and each of the plurality of comparison images are considered to be obtained in order to replace the character having the low appearance rate].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang’859 according to the teaching of JPH09190506A to replace the character having the low appearance rate in the original electronic document with the corrected character data associated with the reference electronic document because this will allow the original electronic document to be corrected more effectively.
The combination of Wang’859 and JPH09190506A does not teach wherein the display device is configured to display the term likely to be the error in the designated document with the first term represented by at least the comparison image with the highest similarity degree.
Fujiwara’332 teaches wherein the display device is configured to display the term likely to be the error in the designated document with the first term represented by at least the comparison image with the highest similarity degree [when the old image and the new image are overlapped with each other and then displayed on a display, both the contents with the highest similarity degree and the contents with the lowest similarity degree in the new image are considered to be presented (paragraph 44)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859 and JPH09190506A according to the teaching of Fujiwara’332 to overlap the original electronic document and the reference electronic document to determine which characters in the original electronic document have the highest similarity degree and which have the lowest similarity degree, because this will allow the original electronic document to be verified more effectively for any included error character.
Claims 2 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Wang’859 (US 2012/0072859), and further in view of JPH09190506A, Hong’799 (US 5,764,799) and Fujiwara’332 (US 2015/0213332).
With respect to claim 2, Wang’859 teaches a proofreading system for proofreading a document (paragraph 52) comprising:
a reception unit [the original electronic document is received in step 220 shown in Fig. 6(a). Therefore, a reception unit is considered to be disclosed to receive the original document];
a processing unit [a system is inherently disclosed with at least one processor to perform its desired function (paragraph 52)]; and
a display device [as shown in Fig. 11, a window screen is provided to enable a user to perform any desired operation. Therefore, a display device is inherently disclosed to provide the window screen shown in Fig. 11 to the user],
wherein the reception unit is configured to receive a comparison document [the reference electronic document is received in step 220 shown in Fig. 6(a)] and a designated document [the original electronic document is received in step 220 shown in Fig. 6(a)],
wherein the processing unit is configured to:
divide a sentence included in the comparison document into a plurality of first terms [regarding the document segmentation (paragraph 51)];
obtain a plurality of comparison images by imaging the plurality of first terms [regarding the document segmentation (paragraph 51)];
divide a sentence included in the designated document into a plurality of second terms [regarding the document segmentation (paragraph 51)];
Wang’859 does not teach: obtain an appearance frequency of each of the plurality of second terms in the comparison document; obtain a verification image by imaging a term having the appearance frequency lower than or equal to a first threshold value, in the plurality of second terms, the term being likely to be an error included in the designated document; obtain similarity degrees between the verification image and each of the plurality of comparison images; and obtain a probability that the first term represented by the comparison image with the similarity degree greater than or equal to a second threshold value is substituted for the term likely to be the error included in the designated document, and wherein the display device is configured to display the term likely to be the error in the designated document with the first term with the highest probability.
JPH09190506A teaches obtain an appearance frequency of each of the plurality of second terms in the comparison document [measuring means for measuring the appearance rate of the characters recognized by the recognizing means, where the character having the low appearance rate (second term) is searched and then corrected with desired character data (pages 1 and 2)]; and
obtain a verification image by imaging a term having the appearance frequency lower than or equal to a first threshold value, in the plurality of second terms, the term being likely to be an error included in the designated document [measuring means for measuring the appearance rate of the characters recognized by the recognizing means according to the accumulated contents (pages 1-3), and a replacement unit that replaces the character having the low appearance rate (the second term) searched by the search unit with the corrected character data (pages 1 and 2). Therefore, the character having the low appearance rate (the second term) is considered to be verified as the character likely to be an error included in a document].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang’859 according to the teaching of JPH09190506A to replace the character having the low appearance rate in the original electronic document with the corrected character data associated with the reference electronic document because this will allow the original electronic document to be corrected more effectively.
The combination of Wang’859 and JPH09190506A does not teach obtain a probability that the first term represented by the comparison image with the similarity degree greater than or equal to a second threshold value is substituted for the term likely to be the error included in the designated document, and wherein the display device is configured to display the term likely to be the error in the designated document with the first term with the highest probability.
Hong’799 teaches obtain a probability that the first term represented by the comparison image with the similarity degree greater than or equal to a second threshold value is substituted for the term likely to be the error included in the designated document [the OCR will determine the probability for each path through the generated lattice and will select the path with the highest probability, thereby outputting a word corresponding to the path with the highest probability (col. 3, lines 13-17). Therefore, obtaining a probability that the first term represented by the comparison image with a similarity degree greater than or equal to a threshold value is substituted for a word likely to be the error in a designated document is considered to be taught, because this allows the character to be recognized more effectively and correctly].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859 and JPH09190506A according to the teaching of Hong’799 to utilize the OCR operation to substitute a word in a designated document because this will allow the characters in the designated document to be corrected effectively.
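For illustration only, the OCR operation cited from Hong’799 (col. 3, lines 13-17), namely selecting the path with the highest probability through a generated lattice and outputting the corresponding word, can be sketched as follows. The lattice contents, values, and function name below are hypothetical and not part of the record.

```python
# Hypothetical sketch of highest-probability path selection through a
# recognition lattice; each position holds (character, probability)
# candidates, and the path maximizing the product of probabilities wins.
from itertools import product
from math import prod

def best_word(lattice):
    best_path = max(product(*lattice),
                    key=lambda path: prod(p for _, p in path))
    return "".join(c for c, _ in best_path)

lattice = [[("c", 0.9), ("e", 0.1)],
           [("a", 0.6), ("o", 0.4)],
           [("t", 0.95), ("l", 0.05)]]
print(best_word(lattice))  # → cat
```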
The combination of Wang’859, JPH09190506A and Hong’799 does not teach wherein the display device is configured to display the term likely to be the error in the designated document with the first term with the highest probability.
Fujiwara’332 teaches wherein the display device is configured to display the term likely to be the error in the designated document with the first term with the highest probability [when the old image and the new image are overlapped with each other and then displayed on a display, both the contents with the highest probability of error and the contents with the lowest probability of error in the new image are considered to be presented (paragraph 44)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A and Hong’799 according to the teaching of Fujiwara’332 to overlap the original electronic document and the reference electronic document to determine the characters having the highest probability of error and the characters having the lowest probability of error in the original electronic document, because this will allow the original electronic document to be verified more effectively for any included error character.
With respect to claim 6, Wang’859 teaches a system (paragraph 52) for proofreading a document comprising:
a reception unit [the original electronic document is received in step 220 shown in Fig. 6(a). Therefore, a reception unit is considered to be disclosed to receive the original document];
a processing unit [a system is inherently disclosed with at least one processor to perform its desired function (paragraph 52)]; and
a display device [as shown in Fig. 11, a window screen is provided to enable a user to perform any desired operation. Therefore, a display device is inherently disclosed to provide the window screen shown in Fig. 11 to the user],
wherein the reception unit is configured to receive a comparison document [the reference electronic document is received in step 220 shown in Fig. 6(a)] and a designated document [the original electronic document is received in step 220 shown in Fig. 6(a)],
wherein the processing unit is configured to:
divide a sentence included in the comparison document into a plurality of first terms [regarding the document segmentation (paragraph 51)];
obtain a plurality of comparison images by imaging the plurality of first terms [regarding the document segmentation (paragraph 51)];
divide a sentence included in the designated document into a plurality of second terms [regarding the document segmentation (paragraph 51)];
Wang’859 does not teach obtain an appearance frequency of each of the plurality of second terms in the comparison document; obtain a verification image by imaging a term having the appearance frequency lower than or equal to a first threshold value in the plurality of second terms, the term being likely to be an error included in the designated document; and infer a substituted term represented by the verification image for the term likely to be the error included in the designated document using an image determination model, and wherein the display device is configured to display the term likely to be the error included in the designated document with the substituted term.
JPH09190506A teaches obtain an appearance frequency of each of the plurality of second terms in the comparison document [measuring means for measuring the appearance rate of the characters recognized by the recognizing means, where the character having the low appearance rate (second term) is searched and then corrected with desired character data (pages 1 and 2)];
obtain a verification image by imaging a term having the appearance frequency lower than or equal to a first threshold value in the plurality of second terms, the term being likely to be an error included in the designated document [measuring means for measuring the appearance rate of the characters recognized by the recognizing means according to the accumulated contents (pages 1-3), and a replacement unit that replaces the character having the low appearance rate (the second term) searched by the search unit with the corrected character data (pages 1 and 2). Therefore, the character having the low appearance rate (the second term) is considered to be verified as the character likely to be an error included in a document];
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention of Wang’859 according to the teaching of JPH09190506A to replace the character having the low appearance rate in the original electronic document with the corrected character data associated with the reference electronic document because this will allow the original electronic document to be corrected more effectively.
The combination of Wang’859 and JPH09190506A does not teach infer a substituted term represented by the verification image for the term likely to be the error included in the designated document using an image determination model, and wherein the display device is configured to display the term likely to be the error included in the designated document with the substituted term.
Hong’799 teaches infer a substituted term represented by the verification image for the term likely to be the error included in the designated document using an image determination model [the OCR will determine the probability for each path through the generated lattice and will select the path with the highest probability, thereby outputting a word corresponding to the path with the highest probability (col. 3, lines 13-17). Therefore, the OCR is considered to have a model arithmetic unit with a function of inferring a term represented by the verification image in order to output a desired word to replace a word in a designated document that is likely to be the error, because this allows the character to be recognized more effectively and correctly].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859 and JPH09190506A according to the teaching of Hong’799 to utilize the OCR operation to substitute a word in a designated document because this will allow the characters in the designated document to be corrected effectively.
The combination of Wang’859, JPH09190506A and Hong’799 does not teach wherein the display device is configured to display the term likely to be the error included in the designated document with the substituted term.
Fujiwara’332 teaches wherein the display device is configured to display the term likely to be the error included in the designated document with the substituted term [when the old image and the new image are overlapped with each other and then displayed on a display, the contents likely to be the error included in the designated document, together with the substituted contents, are considered to be presented (paragraph 44)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A and Hong’799 according to the teaching of Fujiwara’332 to overlap the original electronic document and the reference electronic document to determine the characters having the highest probability of error and the characters having the lowest probability of error in the original electronic document, because this will allow the original electronic document to be verified more effectively for any included error character.
Claims 3-5, 7, 8 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Wang’859 (US 2012/0072859), JPH09190506A, Hong’799 (US 5,764,799), Fujiwara’332 (US 2015/0213332) and further in view of Stark’001 (US 2019/0385001).
With respect to claim 3, which further limits claim 2, the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 does not teach wherein the probability is obtained by using a machine learning model.
Stark’001 teaches wherein the probability is obtained by using a machine learning model [a neural network is being used to output the recognized characters according to the generated probabilities (paragraph 3)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 according to the teaching of Stark’001 to use a neural network to perform character recognition because this will allow characters on an image to be recognized and outputted more effectively.
With respect to claim 4, which further limits claim 3, the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 does not teach wherein the machine learning model is learned using the comparison document group.
Stark’001 teaches wherein the machine learning model is learned using the comparison document group [a neural network is used to output the recognized characters according to the generated probabilities by comparing to the templates (document group) (paragraphs 3 and 18)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 according to the teaching of Stark’001 to use a neural network to perform character recognition because this will allow characters on an image to be recognized and outputted more effectively.
With respect to claim 5, which further limits claim 3, the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 does not teach wherein the machine learning model is a neural network model.
Stark’001 teaches wherein the machine learning model is a neural network model [a neural network is being used to output the recognized characters according to the generated probabilities (paragraph 3)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 according to the teaching of Stark’001 to use a neural network to perform character recognition because this will allow characters on an image to be recognized and outputted more effectively.
With respect to claim 7, which further limits claim 6, the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 does not teach wherein the image determination model is a machine learning model.
Stark’001 teaches wherein the image determination model is a machine learning model [a neural network is being used to output the recognized characters according to the generated probabilities (paragraph 3)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 according to the teaching of Stark’001 to use a neural network to perform character recognition because this will allow characters on an image to be recognized and outputted more effectively.
With respect to claim 8, which further limits claim 6, the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 does not teach wherein the machine learning model is learned using the comparison image group.
Stark’001 teaches wherein the machine learning model is learned using the comparison image group [a neural network is used to output the recognized characters according to the generated probabilities by comparing to the templates (image group) (paragraphs 3 and 18)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 according to the teaching of Stark’001 to use a neural network to perform character recognition because this will allow characters on an image to be recognized and outputted more effectively.
With respect to claim 11, which further limits claim 7, the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 does not teach wherein the machine learning model is a neural network model.
Stark’001 teaches wherein the machine learning model is a neural network model [a neural network is being used to output the recognized characters according to the generated probabilities (paragraph 3)].
Therefore, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination of Wang’859, JPH09190506A, Hong’799 and Fujiwara’332 according to the teaching of Stark’001 to use a neural network to perform character recognition because this will allow characters on an image to be recognized and outputted more effectively.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUO LONG CHEN whose telephone number is (571) 270-3759. The examiner can normally be reached on M-F 9am-5pm.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Benny Tieu, can be reached on (571) 272-7490. The fax phone number for the organization where this application or proceeding is assigned is (571) 273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUO LONG CHEN/ Primary Examiner, Art Unit 2682