Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-7, 12, 15, and 19-20 are rejected under 35 U.S.C. 103 as being unpatentable over Ishiguro (US 2011/0097002 A1) in view of Yoshimura et al. (JP 2006350662 A, "Translation device and translation method"), hereinafter "Yoshimura".
Regarding claim 1, Ishiguro teaches: perform image processing on an image to generate a processed image (Please note, paragraph 0035. As indicated, an image analysis unit 202 analyzes the input image data); acquire a correct character string for a text area in the processed image (Please note, paragraph 0035. As indicated, the image analysis unit 202 thus extracts the components (character strings, photographs, figures, and the like) which constitute the document.); acquire a recognized character string by performing character recognition on the text area in the processed image (Please note, paragraph 0035. As indicated, a character recognition unit 203 performs the character recognition process to a character string area extracted by the image analysis unit 202. A layout information generation unit 204 extracts layout information in the character string area extracted by the image analysis unit 202. Here, it should be noted that the layout information includes a font, a size and the like of the character string); perform collation between the correct character string and the recognized character string (Please note, paragraph 0035. As indicated, a character recognition correction unit 206 corrects the recognition result by comparing the character string extracted by the image analysis unit 202 with the layout result obtained by the layout unit 205); and display one or more windows reflecting a result of the collation between the correct character string and the recognized character string on a display for allowing a user to evaluate whether the image processing is suitable for character recognition (Please note, paragraph 0035. As indicated, an output unit 207 displays the character recognition result, a user interface, and the like.).
Ishiguro does not expressly recite, wherein the circuitry controls a displaying manner of at least one window of the one or more windows to vary according to the result of the collation.
Yoshimura teaches, the circuitry controls a displaying manner of at least one window of the one or more windows to vary according to the result of the collation. (Please note, page 9, second paragraph. As indicated the display mode control unit 25 compares the original text input to the input unit 12 with the translated text translated by the translation unit 15 for all the print modes selected and operated by the print mode selection operation unit 24.)
Ishiguro and Yoshimura are combinable because they are from the same field of endeavor.
Before the effective filing date of the claimed invention, it would have been obvious to a person of ordinary skill in the art to incorporate Yoshimura's teaching, wherein circuitry controls a displaying manner of at least one window of the one or more windows to vary according to the result of the collation, into Ishiguro's invention.
The suggestion/motivation for doing so would have been as indicated on page 9, first paragraph, “The display mode control unit 25 determines the display mode for printing on the paper by the printer 26 based on the detection result of the translation accuracy detection unit 18 and the print mode selected and operated by the print mode selection operation unit 24”.
Therefore, it would have been obvious to combine Yoshimura with Ishiguro to obtain the invention as specified in claim 1.
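Purely as an illustrative aside, and not as part of the claim mapping or the record: the collation recited in claim 1 (comparing a correct character string against an OCR-recognized character string and reporting a result a display layer could act on) could be sketched as follows. The function and variable names here are hypothetical and appear in neither Ishiguro nor Yoshimura.

```python
import difflib

def collate(correct: str, recognized: str) -> dict:
    """Compare a ground-truth character string with an OCR result.

    Returns a similarity ratio and the differing spans, which a
    display layer could use to vary how each text-area window is shown.
    """
    matcher = difflib.SequenceMatcher(None, correct, recognized)
    return {
        "ratio": matcher.ratio(),          # 1.0 indicates an exact match
        "opcodes": matcher.get_opcodes(),  # insert/delete/replace spans
        "match": correct == recognized,
    }

result = collate("character recognition", "charaoter recognit1on")
print(result["match"], round(result["ratio"], 2))
```

This sketch only illustrates the general notion of collation between a correct and a recognized string; the references themselves describe the comparison in terms of their own functional units (e.g., Ishiguro's character recognition correction unit 206).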
Regarding claim 2, Yoshimura teaches wherein the circuitry controls the displaying manner of the at least one window to vary according to the result of the collation by controlling a displaying mode of a predetermined window component of the at least one window to vary according to the result of the collation. (Please note, page 9, second paragraph. As indicated, the display mode control unit 25 compares the original text input to the input unit 12 with the translated text translated by the translation unit 15 for all the print modes selected and operated by the print mode selection operation unit 24.)
Regarding claim 3, Ishiguro teaches wherein the predetermined window component includes a frame indicating the text area displayed on the at least one window as being superimposed on the processed image, and the circuitry controls a displaying mode of the frame to vary according to the result of the collation. (Please note, paragraph 0056. As indicated, in a step S6020, the character recognition unit 203 generates the path based on the combination pattern of the character regions obtained in the step S6010. As described above, the path indicates the pattern of the character cut out from the certain character string. Since it is generally conceived that the plural patterns of the characters are cut out, the plural paths are resultingly generated.)
Regarding claim 4, Ishiguro teaches wherein the circuitry controls the displaying mode of the frame to vary according to the result of the collation by controlling at least one of a color of a line of the frame, a thickness of the line of the frame, a type of the line of the frame, or a background color within the frame to vary according to the result of the collation. (Please note, figure 6.)
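Again purely as an illustrative aside, and not as part of the claim mapping: the general notion recited in claims 3-4, i.e., varying the color, thickness, or line type of a frame according to the collation result, could be sketched as a simple mapping. All names and thresholds below are hypothetical and are not drawn from either reference.

```python
def frame_style(ratio: float) -> dict:
    """Map a collation similarity ratio to a display style for the
    frame drawn around a text area (illustrative thresholds only)."""
    if ratio == 1.0:
        return {"color": "green", "thickness": 1, "line": "solid"}
    if ratio >= 0.8:
        return {"color": "orange", "thickness": 2, "line": "dashed"}
    return {"color": "red", "thickness": 3, "line": "dotted"}

print(frame_style(1.0))   # style for an exact match
print(frame_style(0.5))   # style for a poor match
```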
Regarding claim 5, Ishiguro teaches wherein when there are a plurality of text areas that are targets for character recognition, the circuitry displays a plurality of frames indicating the plurality of text areas on one of the one or more windows displaying results of the collation for all of the plurality of text areas being targets for character recognition in the processed image. (Please note, paragraph 0048. As indicated, subsequently, the document image analyzing process in the step S3000 of FIG. 4 will be described. In the document image analyzing process, the image analysis unit 202 recognizes meaningful blocks in the image as a lump, and judges the attribute for each block. For example, if the image analysis unit 202 performs the document image analyzing process to the document image illustrated in FIG. 5A, then the document image is divided into blocks such as a text block, a picture block and a photo block as illustrated in FIG. 5B.)
Regarding claim 6, Ishiguro teaches wherein the predetermined window component includes a window frame surrounding the at least one window, and the circuitry controls a displaying mode of the window frame to vary according to the result of the collation for the text area. (Please note, figure 14.)
Regarding claim 7, Ishiguro teaches wherein the circuitry controls the displaying mode of the window frame to vary according to the result of the collation by controlling at least one of a color of a line of the window frame, a thickness of the line of the window frame, a type of the line of the window frame, or a background color within the window frame to vary according to the result of the collation. (Please note, figure 6.)
Regarding claim 12, Yoshimura teaches wherein the circuitry controls the displaying manner of the at least one window to vary according to the result of the collation by controlling a display content of a predetermined window component of the at least one window to vary according to the result of the collation. (Please note, page 9, second paragraph. As indicated, the display mode control unit 25 compares the original text input to the input unit 12 with the translated text translated by the translation unit 15 for all the print modes selected and operated by the print mode selection operation unit 24.)
Regarding claim 15, Ishiguro teaches wherein the predetermined window component includes text that is displayed on the at least one window and notifies the user of the result of the collation, and the circuitry controls the display content of the predetermined window component to vary according to the result of the collation by controlling a content of the text to vary according to the result of the collation. (Please note, paragraph 0035. As indicated, FIG. 3 is the block diagram illustrating an example of the functional constitution of the character recognition processing apparatus. In the drawing, an input unit 201 accepts user's instructions and inputs of paper-medium documents. The input paper-medium document is then converted into image data. An image analysis unit 202 analyzes the input image data, and thus extracts the components (character strings, photographs, figures, and the like) which constitute the document. A character recognition unit 203 performs the character recognition process to a character string area extracted by the image analysis unit 202. A layout information generation unit 204 extracts layout information in the character string area extracted by the image analysis unit 202. Here, it should be noted that the layout information includes a font, a size and the like of the character string. A layout unit 205 performs a layout process by using the layout information generated by the layout information generation unit 204 and the character (code) recognized by the character recognition unit 203. A character recognition correction unit 206 corrects the recognition result by comparing the character string extracted by the image analysis unit 202 with the layout result obtained by the layout unit 205. An output unit 207 displays the character recognition result, a user interface, and the like.)
Regarding claims 19-20, analysis similar to that presented for claim 1 is applicable.
Allowable Subject Matter
Claims 8-11, 13-14 and 16-18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: The closest applied Prior Art of record fails to disclose or reasonably suggest wherein when there are a plurality of text areas that are targets for character recognition, the circuitry displays, as the one or more windows, a first window that displays results of the collation for all of the plurality of text areas being targets for character recognition in the processed image and a second window that is displayed in response to processing performed on one of the plurality of text areas in the first window and indicates a result of the collation for the text area on which the processing is performed, and the window frame is a window frame surrounding the second window.
Examiner’s Note
The examiner cites particular figures, paragraphs, columns and line numbers in the references as applied to the claims for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well.
It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMIR ALAVI whose telephone number is (571)272-7386. The examiner can normally be reached on M-F from 8:00-4:30.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Vu Le can be reached at (571)272-7332. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/AMIR ALAVI/Primary Examiner, Art Unit 2668 Wednesday, January 28, 2026