Prosecution Insights
Last updated: April 19, 2026
Application No. 18/633,611

SCANNING SYSTEM, NON-TRANSITORY COMPUTER-READABLE STORAGE MEDIUM STORING SCANNING PROGRAM, AND METHOD FOR PRODUCING OUTPUT MATTER

Status: Non-Final OA (§103, §DP)
Filed: Apr 12, 2024
Examiner: ZAK, JACQUELINE ROSE
Art Unit: 2666
Tech Center: 2600 — Communications
Assignee: Seiko Epson Corporation
OA Round: 1 (Non-Final)

Grant Probability: 67% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 10m
Grant Probability With Interview: 55%

Examiner Intelligence

Career Allow Rate: 67% (8 granted / 12 resolved), +4.7% vs TC avg (above average)
Interview Lift: -11.4% (minimal), based on resolved cases with interview
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 58 across all art units; 46 currently pending
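These headline figures are simple ratios over the examiner's resolved docket, and the interview lift is just the difference between the with-interview and without-interview allow rates. A minimal sketch of that arithmetic, assuming a flat per-case record (the field names are hypothetical, not this dashboard's actual schema):

```python
# Hypothetical sketch, not the dashboard's actual pipeline: derive the
# headline examiner stats from a flat list of resolved cases.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # did the application issue?
    had_interview: bool  # was an examiner interview held?

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Share of resolved cases that ended in a grant."""
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow rate with an interview minus allow rate without one."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# 8 grants out of 12 resolved cases reproduces the 67% shown above.
cases = [ResolvedCase(granted=i < 8, had_interview=False) for i in range(12)]
print(f"career allow rate: {allow_rate(cases):.0%}")  # -> 67%
```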

Statute-Specific Performance

§101: 5.7% (-34.3% vs TC avg)
§103: 56.3% (+16.3% vs TC avg)
§102: 21.1% (-18.9% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)
Tech Center averages are estimates • Based on career data from 12 resolved cases
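The "vs TC avg" deltas above are plain subtractions of an estimated Tech Center average from the examiner's per-statute rate; each displayed delta happens to back-solve to a 40.0% baseline. A small sketch of that arithmetic (the baseline is inferred from the deltas shown here, not independent data):

```python
# Sketch only: reproduce the per-statute deltas shown above. The 40.0% TC
# baseline is back-solved from the displayed deltas, not independent data.
examiner_rate = {"§101": 5.7, "§103": 56.3, "§102": 21.1, "§112": 13.8}
tc_average = 40.0  # estimated Tech Center average (assumption)

for statute, rate in examiner_rate.items():
    print(f"{statute}: {rate}% ({rate - tc_average:+.1f}% vs TC avg)")
```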

Office Action

Grounds: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Status

Claims 1-8 are pending for examination in the application filed 04/12/2024.

Priority

Acknowledgement is made of Applicant's claim for foreign priority under 35 U.S.C. 119(a)-(d). The certified copy has been filed in parent application JP2023-066295 filed on 04/14/2023.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 04/12/2024, 05/09/2025, and 10/23/2025 have been considered by the examiner.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission.
For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 1-3 and 5-6 are provisionally rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 5, 9-10, and 12 of copending Application No. 18/633,620 (reference application). This is a provisional nonstatutory double patenting rejection because the patentably indistinct claims have not in fact been patented. Although the claims at issue are not identical, they are not patentably distinct from each other as will be described in reference to the table below. Emphasis has been added in bold to the elements which are identical; and elements which are similar but not identical have been italicized.

Reference Application 18/633,620 | Current Application 18/633,611

Reference claims 1, 9, and 12, compared with current claim 1:
- Ref. claim 1: A scanning system comprising:
- Ref. claim 1: a receiving section that receives a specified word and a processing setting from a user
- Ref. claim 1: a scanner that performs scanning to read an image
- Ref. claim 1: receives a specified word and a processing setting from a user and stores the received specified word and the received processing setting to a nonvolatile storage medium such that the specified word is associated with the processing setting
- Ref. claim 1: a determining section that performs character recognition on image data indicating the read image to recognize a character string
- Ref. claim 9: the determining section determines whether a candidate character string that is acquired as a result of the recognition and is accurate
- Ref. claim 12: wherein a plurality of different specified words are stored in the storage medium, the determining section acquires the candidate character strings
- Current claim 1: A scanning system comprising: a receiving section that receives a scan setting and a scan instruction; a scanner that causes an image sensor to operate and performs scanning to read an image in accordance with the scan setting in response to the reception of the scan setting and the scan instruction; and a determining section that performs character recognition on image data indicating the read image to recognize a character string, and outputs a candidate character string that is a candidate for the recognized character string, wherein the determining section outputs a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting.

Reference claims 9 and 12, compared with current claim 2:
- Ref. claim 9: wherein the scan setting includes a setting for a scanning resolution, the determining section determines whether a candidate character string that is acquired as a result of the recognition and is accurate with a probability equal to or higher than a threshold matches the specified word, and the threshold when the scanning is performed at a first resolution is lower than the threshold when the scanning is performed at a second resolution higher than the first resolution.
- Ref. claim 12: wherein a plurality of different specified words are stored in the storage medium, the determining section acquires the candidate character strings
- Current claim 2: wherein the scan setting includes a setting for a scanning resolution, the determining section outputs a candidate character string that is among the candidate character strings obtained as a result of the recognition and is accurate with a probability equal to or higher than a threshold, and the threshold when the scanning is performed at a first resolution is lower than the threshold when the scanning is performed at a second resolution that is higher than the first resolution.

Reference claims 1 and 10, compared with current claim 3:
- Ref. claim 1: a receiving section that receives a specified word and a processing setting
- Ref. claim 10: wherein the scan setting includes a setting for a scanning resolution, the determining section determines that a candidate character string acquired as a result of the recognition matches the specified word when a number of characters that are included in the candidate character string and do not match the specified word is equal to or smaller than a predetermined number of characters, and the predetermined number of characters when the scanning is performed at a first resolution is larger than the predetermined number of characters when the scanning is performed at a second resolution higher than the first resolution.
- Current claim 3: wherein the receiving section receives a setting for a specified word, the scan setting includes a setting for a scanning resolution, the determining section outputs the candidate character string such that a number of characters that are included in the candidate character string output by the determining section and do not match the specified word is equal to or smaller than a predetermined number of characters, and the predetermined number of characters when the scanning is performed at a first resolution is larger than the predetermined number of characters when the scanning is performed at a second resolution that is higher than the first resolution.

Reference claims 1 and 12, compared with current claim 5:
- Ref. claim 1: a determining section that… determines whether the specified word read from the storage medium is included in the recognized character string
- Ref. claim 1: a processing section that performs processing on the image data with the processing setting associated with the specified word when the specified word is included in the image data
- Ref. claim 12: wherein a plurality of different specified words are stored in the storage medium, the determining section acquires the candidate character strings
- Current claim 5: wherein the determining section determines whether the specified word matches any of the candidate character strings, and the scanning system further comprises a processing section that performs processing corresponding to the specified word when the specified word matches any of the candidate character strings.

Reference claim 5, compared with current claim 6:
- Ref. claim 5: wherein the receiving section receives units in which the image data is divided into different files
- Current claim 6: wherein the receiving section receives specifying of units in which the image data is divided into different files.

Claim 1 is provisionally rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 1, 9, and 12 of co-pending application 18/633,620.
Regarding claim 1, "A scanning system comprising: a receiving section that receives a scan setting and a scan instruction; a scanner that causes an image sensor to operate and performs scanning to read an image in accordance with the scan setting in response to the reception of the scan setting and the scan instruction; and a determining section that performs character recognition on image data indicating the read image to recognize a character string, and outputs a candidate character string that is a candidate for the recognized character string, wherein the determining section outputs a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting," as disclosed in the current application provides the same functionality as claims 1, 9, and 12 of the reference application 18/633,620. Claims 1, 9, and 12 of the reference application include the limitations: "A scanning system comprising: a receiving section that receives a specified word and a processing setting from a user; a scanner that performs scanning to read an image; receives a specified word and a processing setting from a user and stores the received specified word and the received processing setting to a nonvolatile storage medium such that the specified word is associated with the processing setting; a determining section that performs character recognition on image data indicating the read image to recognize a character string; the determining section determines whether a candidate character string that is acquired as a result of the recognition and is accurate; wherein a plurality of different specified words are stored in the storage medium, the determining section acquires the candidate character strings," as detailed in the table. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the scanning system of the current application to be the same as the scanning system of the reference application through the acquisition of a plurality of candidate character strings, as claimed in the dependent claims of the reference application, to produce known results with a reasonable expectation of success.

Claim 2 is provisionally rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 9 and 12 of co-pending application 18/633,620.

Regarding claim 2, "wherein the scan setting includes a setting for a scanning resolution, the determining section outputs a candidate character string that is among the candidate character strings obtained as a result of the recognition and is accurate with a probability equal to or higher than a threshold, and the threshold when the scanning is performed at a first resolution is lower than the threshold when the scanning is performed at a second resolution that is higher than the first resolution," as disclosed in the current application provides the same functionality as claims 9 and 12 of the reference application 18/633,620. Claims 9 and 12 of the reference application include the limitations: "wherein the scan setting includes a setting for a scanning resolution, the determining section determines whether a candidate character string that is acquired as a result of the recognition and is accurate with a probability equal to or higher than a threshold matches the specified word, and the threshold when the scanning is performed at a first resolution is lower than the threshold when the scanning is performed at a second resolution higher than the first resolution; wherein a plurality of different specified words are stored in the storage medium, the determining section acquires the candidate character strings." Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the scanning system of the current application to be the same as the scanning system of the reference application.

Claim 3 is provisionally rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 1 and 10 of co-pending application 18/633,620.

Regarding claim 3, "wherein the receiving section receives a setting for a specified word, the scan setting includes a setting for a scanning resolution, the determining section outputs the candidate character string such that a number of characters that are included in the candidate character string output by the determining section and do not match the specified word is equal to or smaller than a predetermined number of characters, and the predetermined number of characters when the scanning is performed at a first resolution is larger than the predetermined number of characters when the scanning is performed at a second resolution that is higher than the first resolution," as disclosed in the current application provides the same functionality as claims 1 and 10 of the reference application 18/633,620. Claims 1 and 10 of the reference application include the limitations: "a receiving section that receives a specified word and a processing setting; wherein the scan setting includes a setting for a scanning resolution, the determining section determines that a candidate character string acquired as a result of the recognition matches the specified word when a number of characters that are included in the candidate character string and do not match the specified word is equal to or smaller than a predetermined number of characters, and the predetermined number of characters when the scanning is performed at a first resolution is larger than the predetermined number of characters when the scanning is performed at a second resolution higher than the first resolution." Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the scanning system of the current application to be the same as the scanning system of the reference application.

Claim 5 is provisionally rejected on the grounds of nonstatutory double patenting as being unpatentable over claims 1 and 12 of co-pending application 18/633,620.
Regarding claim 5, "wherein the determining section determines whether the specified word matches any of the candidate character strings, and the scanning system further comprises a processing section that performs processing corresponding to the specified word when the specified word matches any of the candidate character strings," as disclosed in the current application provides the same functionality as claims 1 and 12 of the reference application 18/633,620. Claims 1 and 12 of the reference application include the limitations: "a determining section that… determines whether the specified word read from the storage medium is included in the recognized character string; a processing section that performs processing on the image data with the processing setting associated with the specified word when the specified word is included in the image data; wherein a plurality of different specified words are stored in the storage medium, the determining section acquires the candidate character strings." Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the scanning system of the current application to be the same as the scanning system of the reference application.

Claim 6 is provisionally rejected on the grounds of nonstatutory double patenting as being unpatentable over claim 5 of co-pending application 18/633,620.

Regarding claim 6, "wherein the receiving section receives specifying of units in which the image data is divided into different files," as disclosed in the current application provides the same functionality as claim 5 of the reference application 18/633,620. Claim 5 of the reference application includes the limitation: "wherein the receiving section receives units in which the image data is divided into different files." Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention for the scanning system of the current application to be the same as the scanning system of the reference application.

Claim Objections

Claim 4 is objected to because of the following informalities: "…when the scanning is performed when the scanning is performed with a color setting…". Appropriate correction is required.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier, as explained in MPEP § 2181, subsection I (note that the list of generic placeholders below is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph):

A. The Claim Limitation Uses the Term "Means" or "Step" or a Generic Placeholder (A Term That Is Simply A Substitute for "Means")

With respect to the first prong of this analysis, a claim element that does not include the term "means" or "step" triggers a rebuttable presumption that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, does not apply. When the claim limitation does not use the term "means," examiners should determine whether the presumption that 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6, does not apply is overcome. The presumption may be overcome if the claim limitation uses a generic placeholder (a term that is simply a substitute for the term "means"). The following is a list of non-structural generic placeholders that may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6: "mechanism for," "module for," "device for," "unit for," "component for," "element for," "member for," "apparatus for," "machine for," or "system for." Welker Bearing Co. v. PHD, Inc., 550 F.3d 1090, 1096, 89 USPQ2d 1289, 1293-94 (Fed. Cir. 2008); Massachusetts Inst. of Tech. v. Abacus Software, 462 F.3d 1344, 1354, 80 USPQ2d 1225, 1228 (Fed. Cir. 2006); Personalized Media, 161 F.3d at 704, 48 USPQ2d at 1886-87; Mas-Hamilton Group v. LaGard, Inc., 156 F.3d 1206, 1214-1215, 48 USPQ2d 1010, 1017 (Fed. Cir. 1998). This list is not exhaustive, and other generic placeholders may invoke 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, paragraph 6.

Such claim limitation(s) is/are in independent claim 1 and dependent claims 2-6:
- a scanning system comprising ([0017] FIG. 1 is a block diagram illustrating a configuration of a multifunction peripheral 1 as a scanning system according to an embodiment of the present disclosure);
- a receiving section;
- a determining section; and
- a processing section.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 4, and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Omuro (US20210289086A1) in view of Thukkaram (US12424009B1).

Regarding claim 1, Omuro teaches a scanning system comprising ([0002] The present disclosure relates to a scanning system, a non-transitory computer-readable storage medium storing a program, and a method for generating scan data in the scanning system. [0023] The control program 13a is firmware for the multi-function printer 1 to execute various processes. The CPU 11 executes a scanning process described later (refer to FIGS. 8 and 9) and a copying process, a printing process, and the like on the basis of the control program 13a): a receiving section that receives a scan setting and a scan instruction ([0035] The touch panel 14a presents various types of information to the user as well as receives various types of operations from the user. For example, the touch panel 14a displays a read-aloud-setting screen DA (refer to, for example, FIG. 4) described later and receives read aloud settings set by the user. [0053] In the character option group 22, any one or more among options "large character size", "decorative character", "colored character", and "date". [0064] When the first setting completion button 32 is selected, the multi-function printer 1 causes the read-aloud-setting values that have been set on the read-aloud-setting screen DA to be stored in the setting value storage area 13e, completing the read aloud settings. [0076] the multi-function printer 1 also speaks the explanation character string 72 including characters indicating a date according to a selection result of the character option group 22 on the third read-aloud-setting screen DA3 (refer to FIG. 6)); a scanner that causes an image sensor to operate and performs scanning to read an image in accordance with the scan setting in response to the reception of the scan setting and the scan instruction ([0043] The generating section 110 generates scan data by scanning the original document 50 with the image reading mechanism 16); and a determining section that performs character recognition on image data indicating the read image to recognize a character string, and outputs a candidate character string that is a candidate for the recognized character string ([0031] The character data storage area 13c stores character data used for an OCR process. The CPU 11 extracts characters corresponding to character data stored in the character data storage area 13c by performing the OCR process and causes the extracted characters to be spoken, as a word corresponding to an image recognition result, from the speaker circuit 17. [0062] The phrase "characters indicating a date" refer to a combination of a number and a year or a fiscal year, a combination of the era name, a number, and a year, a one-digit or two-digit number and a month, a combination of a one-digit or two-digit number and a day, or the like).

Omuro does not teach wherein the determining section outputs a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting.

Thukkaram, in the same field of endeavor of OCR scan analysis, teaches wherein the determining section outputs a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting ([col. 2 ln. 66-67] An OCR module performs OCR of a screen capture image of the at least one region to detect words found in the at least one region. A page identification module identifies a page of the application based on a match of the detected words to a word map. [col. 5 ln. 1-24] In one implementation, a keyword matching module 118 is configured to determine, from the post-processed OCR results, a set of matching keywords. The page is identified in page identification module 120, which may identify the page context regarding page attributes (e.g., a calendar page, a chat page, a teleconference page, a data storage page, etc.). In one implementation, the current context of an application (e.g., page name) is identified using the group of keywords present on that page (e.g., words that are user interface elements identified from the OCR). In some implementations, a page change detection module 122 detects page changes. For example, a page change can be identified based on detecting changes in keywords. For example, a selected threshold change in the percentage of keywords may be used to identify a page change. In one implementation an algorithm decides if a particular set or subset of words belongs to a particular application screen. The algorithm may be based on a distance algorithm in which a pre-selected threshold change in the detected word on a page is used to identify a page change. In one implementation, the threshold change in the distance algorithm is set to 50%. When the change in text is at least 50% the algorithm concludes that there is a change in page).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Omuro with the teachings of Thukkaram to output a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting because "an administrator configures the page identification engine 106 for a particular target application. A particular target application may, for example, have certain sections of the screen that have keywords uniquely identifying the page from other pages of the same target application. The configuration process may include, for example, the configuration administrator selecting one or more settings" [Thukkaram col. 5 ln. 26-34].
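The Thukkaram passage quoted above frames page-change detection as a distance test over detected keywords, with the change threshold set to 50% in one implementation. A minimal sketch of one plausible reading of that test; the set-based change ratio below is an assumption, since the quoted text does not spell out the exact distance formula:

```python
# Hedged sketch of the keyword-distance page-change test Thukkaram describes
# (col. 5): flag a page change when the share of changed keywords between two
# captures meets a pre-selected threshold (50% in the quoted implementation).
def page_changed(prev_words: set[str], curr_words: set[str],
                 threshold: float = 0.50) -> bool:
    if not prev_words and not curr_words:
        return False
    changed = len(prev_words.symmetric_difference(curr_words))
    return changed / len(prev_words | curr_words) >= threshold

# 4 of 5 distinct keywords differ (80% change), so a page change is flagged.
print(page_changed({"calendar", "invite", "today"},
                   {"chat", "send", "today"}))  # -> True
```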
Regarding claim 4, Omuro and Thukkaram teach the system of claim 1. Thukkaram teaches wherein the scan setting includes a color setting for the scanning, and the determining section outputs a larger number of candidate character strings that are candidates for the recognized character string when the scanning is performed with a color setting in which accuracy of the character recognition is set to a first level than a number of candidate character strings that are candidates for the recognized character string and are output when the scanning is performed when the scanning is performed with a color setting in which the accuracy of the character recognition is set to a second level that is higher than the first level ([pg. 4 ln. 24-30] The page identification engine 106 uses optical character recognition (OCR) to identify page information. In one implementation, to improve the accuracy of the OCR, an OCR pre-processing module 112 is included to perform one or more operations to improve the accuracy of OCR. Additionally, in one implementation an OCR post-processing module 116 is included to improve the accuracy of the OCR. [col. 8 ln. 16-29] The pre-processing prior to OCR may include performing pre-processing to make it easier for the OCR to distinguish text from its background. There are examples of applications in which the text is written as white text on a black background. This can make OCR less accurate. Black text on a white background is better for OCR accuracy. There are similar issues with colored text of a first color on a background of a different color. In one implementation, the pre-processing converts the image to grayscale, and then performs thresholding and binarization. The thresholding is selected to achieve a clear differentiation between the foreground and the background, where the foreground is the portion of the image containing text and the background is the non-text region. [col. 8 ln. 31-51] OCR typically generates a single big sentence with all the found words separated by spaces and tabs to match the actual positions of the words in the image. This big sentence contains misspelled words, unwanted special characters, and connected adjacent letters. This can be cleaned up and converted into an array of words. The array of words can be used for page identification system).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Omuro with the teachings of Thukkaram to include a color setting in which the accuracy of the character recognition varies based on the color because "Optical Character Recognition (OCR) [identifies] words on the page and use the identified words to determine the page… pre-processing before OCR is performed to improve the accuracy of the OCR. In some implementations, this may include resizing to increase a font size and performing an operation to distinguish text from its background" [Thukkaram col. 2 ln. 2-11].
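The pre-processing Thukkaram describes for claim 4's color-dependent accuracy point, converting to grayscale and then thresholding and binarizing so text separates from its background, is easy to illustrate. A hedged sketch using Pillow; the fixed 128 cutoff is an assumption, since the reference only says the threshold is selected to clearly separate foreground text from the background:

```python
# Sketch of grayscale-plus-binarization OCR pre-processing along the lines
# Thukkaram describes (col. 8). The 128 cutoff is an illustrative assumption.
from PIL import Image

def binarize_for_ocr(path: str, threshold: int = 128) -> Image.Image:
    gray = Image.open(path).convert("L")  # convert to grayscale
    # Map pixels above the cutoff to white and the rest to black, so dark
    # text stands out cleanly from a light background.
    return gray.point(lambda p: 255 if p > threshold else 0)

# binarize_for_ocr("scan.png").save("scan_bw.png")  # hypothetical file names
```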
Regarding claim 7, Omuro teaches a non-transitory computer-readable storage medium storing a scanning program for causing a computer to ([0002] The present disclosure relates to a scanning system, a non-transitory computer-readable storage medium storing a program, and a method for generating scan data in the scanning system. [0023] The control program 13a is firmware for the multi-function printer 1 to execute various processes. The CPU 11 executes a scanning process described later (refer to FIGS. 8 and 9) and a copying process, a printing process, and the like on the basis of the control program 13a): instruct a scanner to perform scanning to read an image in accordance with a scan setting ([0035] The touch panel 14a presents various types of information to the user as well as receives various types of operations from the user. For example, the touch panel 14a displays a read-aloud-setting screen DA (refer to, for example, FIG. 4) described later and receives read aloud settings set by the user. [0053] In the character option group 22, any one or more among options "large character size", "decorative character", "colored character", and "date". [0064] When the first setting completion button 32 is selected, the multi-function printer 1 causes the read-aloud-setting values that have been set on the read-aloud-setting screen DA to be stored in the setting value storage area 13e, completing the read aloud settings. [0076] the multi-function printer 1 also speaks the explanation character string 72 including characters indicating a date according to a selection result of the character option group 22 on the third read-aloud-setting screen DA3 (refer to FIG. 6). [0043] The generating section 110 generates scan data by scanning the original document 50 with the image reading mechanism 16); perform character recognition on image data indicating the read image to recognize a character string ([0031] The character data storage area 13c stores character data used for an OCR process. The CPU 11 extracts characters corresponding to character data stored in the character data storage area 13c by performing the OCR process and causes the extracted characters to be spoken, as a word corresponding to an image recognition result, from the speaker circuit 17. [0062] The phrase "characters indicating a date" refer to a combination of a number and a year or a fiscal year, a combination of the era name, a number, and a year, a one-digit or two-digit number and a month, a combination of a one-digit or two-digit number and a day, or the like).

Omuro does not teach output a plurality of candidate character strings according to the scan setting, and the image data, the candidate character strings being candidates for the recognized character string.

Thukkaram, in the same field of endeavor of OCR scan analysis, teaches output a plurality of candidate character strings according to the scan setting, and the image data, the candidate character strings being candidates for the recognized character string ([col. 2 ln. 66-67] An OCR module performs OCR of a screen capture image of the at least one region to detect words found in the at least one region. A page identification module identifies a page of the application based on a match of the detected words to a word map. [col. 5 ln. 1-24] In one implementation, a keyword matching module 118 is configured to determine, from the post-processed OCR results, a set of matching keywords. The page is identified in page identification module 120, which may identify the page context regarding page attributes (e.g., a calendar page, a chat page, a teleconference page, a data storage page, etc.). In one implementation, the current context of an application (e.g., page name) is identified using the group of keywords present on that page (e.g., words that are user interface elements identified from the OCR). In some implementations, a page change detection module 122 detects page changes. For example, a page change can be identified based on detecting changes in keywords. For example, a selected threshold change in the percentage of keywords may be used to identify a page change. In one implementation an algorithm decides if a particular set or subset of words belongs to a particular application screen. The algorithm may be based on a distance algorithm in which a pre-selected threshold change in the detected word on a page is used to identify a page change. In one implementation, the threshold change in the distance algorithm is set to 50%. When the change in text is at least 50% the algorithm concludes that there is a change in page).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the medium of Omuro with the teachings of Thukkaram to output a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting because "an administrator configures the page identification engine 106 for a particular target application. A particular target application may, for example, have certain sections of the screen that have keywords uniquely identifying the page from other pages of the same target application. The configuration process may include, for example, the configuration administrator selecting one or more settings" [Thukkaram col. 5 ln. 26-34].

Regarding claim 8, Omuro teaches a method for producing output matter, the method comprising ([0002] The present disclosure relates to a scanning system, a non-transitory computer-readable storage medium storing a program, and a method for generating scan data in the scanning system. [0023] The control program 13a is firmware for the multi-function printer 1 to execute various processes. The CPU 11 executes a scanning process described later (refer to FIGS. 8 and 9) and a copying process, a printing process, and the like on the basis of the control program 13a): causing a scanner to perform scanning to read an image in accordance with a scan setting ([0035] The touch panel 14a presents various types of information to the user as well as receives various types of operations from the user. For example, the touch panel 14a displays a read-aloud-setting screen DA (refer to, for example, FIG. 4) described later and receives read aloud settings set by the user. [0053] In the character option group 22, any one or more among options "large character size", "decorative character", "colored character", and "date". [0064] When the first setting completion button 32 is selected, the multi-function printer 1 causes the read-aloud-setting values that have been set on the read-aloud-setting screen DA to be stored in the setting value storage area 13e, completing the read aloud settings. [0076] the multi-function printer 1 also speaks the explanation character string 72 including characters indicating a date according to a selection result of the character option group 22 on the third read-aloud-setting screen DA3 (refer to FIG. 6). [0043] The generating section 110 generates scan data by scanning the original document 50 with the image reading mechanism 16); performing character recognition on image data indicating the read image to recognize a character string ([0031] The character data storage area 13c stores character data used for an OCR process. The CPU 11 extracts characters corresponding to character data stored in the character data storage area 13c by performing the OCR process and causes the extracted characters to be spoken, as a word corresponding to an image recognition result, from the speaker circuit 17. [0062] The phrase "characters indicating a date" refer to a combination of a number and a year or a fiscal year, a combination of the era name, a number, and a year, a one-digit or two-digit number and a month, a combination of a one-digit or two-digit number and a day, or the like).

Omuro does not teach producing output matter including a number of candidate character strings according to the scan setting, and the image data, the candidate character strings being candidates for the recognized character string.
Thukkaram, in the same field of endeavor of OCR scan analysis, teaches producing output matter including a number of candidate character strings according to the scan setting, and the image data, the candidate character strings being candidates for the recognized character string ([col. 2 ln. 66-67] An OCR module performs OCR of a screen capture image of the at least one region to detect words found in the at least one region. A page identification module identifies a page of the application based on a match of the detected words to a word map. [col. 5 ln. 1-24] In one implementation, a keyword matching module 118 is configured to determine, from the post-processed OCR results, a set of matching keywords. The page is identified in page identification module 120, which may identify the page context regarding page attributes (e.g., a calendar page, a chat page, a teleconference page, a data storage page, etc.). In one implementation, the current context of an application (e.g., page name) is identified using the group of keywords present on that page (e.g., words that are user interface elements identified from the OCR). In some implementations, a page change detection module 122 detects page changes. For example, a page change can be identified based on detecting changes in keywords. For example, a selected threshold change in the percentage of keywords may be used to identify a page change. In one implementation an algorithm decides if a particular set or subset of words belongs to a particular application screen. The algorithm may be based on a distance algorithm in which a pre-selected threshold change in the detected word on a page is used to identify a page change. In one implementation, the threshold change in the distance algorithm is set to 50%. When the change in text is at least 50% the algorithm concludes that there is a change in page).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the method of Omuro with the teachings of Thukkaram to output a plurality of candidate character strings that are candidates for the recognized character string according to the scan setting because "an administrator configures the page identification engine 106 for a particular target application. A particular target application may, for example, have certain sections of the screen that have keywords uniquely identifying the page from other pages of the same target application. The configuration process may include, for example, the configuration administrator selecting one or more settings" [Thukkaram col. 5 ln. 26-34].

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Omuro in view of Thukkaram and King (US20140294302A1).

Regarding claim 6, Omuro and Thukkaram teach the system of claim 1. King, in the same field of endeavor of word scanning systems, teaches wherein the receiving section receives units in which the image data is divided into different files ([0256] New and upcoming file systems and their associated databases often have the ability to store a variety of metadata associated with each file. Traditionally, this metadata has included such things as the ID of the user who created the file, the dates of creation, last modification, and last use. Newer file systems allow such extra information as keywords, image characteristics, document sources and user comments to be stored, and in some systems this metadata can be arbitrarily extended. File systems can therefore be used to store information that would be useful in implementing the current system. For example, the date when a given document was last printed can be stored by the file system, as can details about which text from it has been captured from paper using the described system, and when and by whom. [0095] Indices may be maintained on several machines on a corporate network. Partial indices may be downloaded to the capture device, or to a machine close to the capture device. Separate indices may be created for users or groups of users with particular interests, habits or permissions. An index may exist for each filesystem, each directory, even each file on a user's hard disk. Indexes are published and subscribed to by users and by systems. It will be important, then, to construct indices that can be distributed, updated, merged and separated efficiently).

Therefore, it would have been obvious to a person of ordinary skill in the art at the time that the invention was made to modify the system of Omuro with the teachings of King to have the image data divided into different files because "Operating systems are also starting to incorporate search engine facilities that allow users to find local files more easily. These facilities can be advantageously used by the system. It means that many of the search-related concepts discussed in Sections 3 and 4 apply not just to today's Internet-based and similar search engines, but also to every personal computer" [King 0257].

Allowable Subject Matter

Claims 2-3 and 5 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if all outstanding rejections and objections were overcome.

Regarding claims 2-3, Omuro teaches wherein the scan setting includes a setting for a scanning resolution ([0085] In S13, the multi-function printer 1 performs a main scan of the original document 50 by using the image reading mechanism 16 to generate main scan data. The main scan data is scan data with a higher resolution than the pre-scan data generated in S02 in FIG. 8. When various settings, such as a read resolution setting, are set before the main scan button 14c is operated). Further regarding claim 3, Omuro teaches wherein the receiving section receives a setting for a specified word ([0035] The touch panel 14a presents various types of information to the user as well as receives various types of operations from the user. For example, the touch panel 14a displays a read-aloud-setting screen DA (refer to, for example, FIG. 4) described later and receives read aloud settings set by the user. [0053] In the character option group 22, any one or more among options "large character size", "decorative character", "colored character", and "date". [0064] When the first setting completion button 32 is selected, the multi-function printer 1 causes the read-aloud-setting values that have been set on the read-aloud-setting screen DA to be stored in the setting value storage area 13e, completing the read aloud settings. [0076] the multi-function printer 1 also speaks the explanation character string 72 including characters indicating a date according to a selection result of the character option group 22 on the third read-aloud-setting screen DA3 (refer to FIG. 6)).

The following limitations were not found to be taught in the art: the determining section outputs a candidate character string that is among the candidate character strings obtained as a result of the recognition and is accurate with a probability equal to or higher than a threshold, and the threshold when the scanning is performed at a first resolution is lower than the threshold when the scanning is performed at a second resolution that is higher than the first resolution (claim 2); and the determining section outputs the candidate character string such that a number of characters that are included in the candidate character string output by the determining section and do not match the specified word is equal to or smaller than a predetermined number of characters, and the predetermined number of characters when the scanning is performed at a first resolution is larger than the predetermined number of characters when the scanning is performed at a second resolution that is higher than the first resolution (claim 3).
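Both limitations identified as allowable tie the OCR acceptance test to the scanning resolution: a lower resolution gets a lower confidence threshold (claim 2) and a larger tolerated character-mismatch count (claim 3). A minimal sketch of that ordering; the 300 dpi cutoff and all numeric values are illustrative assumptions, since the claims fix only the relative ordering between the two resolutions:

```python
# Illustrative sketch of the resolution-dependent acceptance logic recited in
# claims 2-3. All numeric values are assumptions; only the ordering (lower
# resolution -> lower threshold, more tolerated mismatches) comes from the
# claim language.
def confidence_threshold(dpi: int) -> float:
    # Claim 2: the threshold at the first (lower) resolution is lower than
    # the threshold at the second (higher) resolution.
    return 0.60 if dpi < 300 else 0.80

def allowed_mismatches(dpi: int) -> int:
    # Claim 3: the tolerated mismatch count at the first (lower) resolution
    # is larger than at the second (higher) resolution.
    return 2 if dpi < 300 else 1

def keep_candidate(candidate: str, probability: float,
                   specified_word: str, dpi: int) -> bool:
    mismatches = sum(a != b for a, b in zip(candidate, specified_word))
    mismatches += abs(len(candidate) - len(specified_word))
    return (probability >= confidence_threshold(dpi)
            and mismatches <= allowed_mismatches(dpi))

# One mismatched character at 65% confidence passes at 150 dpi but not 600.
print(keep_candidate("lnvoice", 0.65, "invoice", 150))  # -> True
print(keep_candidate("lnvoice", 0.65, "invoice", 600))  # -> False
```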
Regarding claim 5, which depends from claim 3, King teaches wherein the determining section determines whether the specified word matches any of the candidate character strings, and the scanning system further comprises a processing section that performs processing corresponding to the specified word when the specified word matches any of the candidate character strings ([Abstract] A system for processing text captured from rendered documents is described. The system receives a sequence of one or more words optically or acoustically captured from a rendered document by a user. The system identifies among words of the sequence a word with which an action has been associated. The system then performs the associated action with respect to the user. [0029] Text from a rendered document is captured 100, typically in optical form by an optical scanner or audio form by a voice recorder, and this image or sound data is then processed 102, for example to remove artifacts of the capture process or to improve the signal-to-noise ratio. A recognition process 104 such as OCR, speech recognition, or autocorrelation then converts the data into a signature, comprised in some embodiments of text, text offsets, or other symbols. [0564] In various embodiments, information associating words or phrases with actions (e.g., markup information) can be stored in the capture device 302).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Jacqueline R Zak whose telephone number is (571) 272-4077. The examiner can normally be reached M-F 9-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Emily Terrell, can be reached at (571) 270-3717. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JACQUELINE R ZAK/
Examiner, Art Unit 2666

/EMILY C TERRELL/
Supervisory Patent Examiner, Art Unit 2666

Prosecution Timeline

Apr 12, 2024
Application Filed
Mar 02, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12586340
PIXEL PERSPECTIVE ESTIMATION AND REFINEMENT IN AN IMAGE
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12462343
MEDICAL DIAGNOSTIC APPARATUS AND METHOD FOR EVALUATION OF PATHOLOGICAL CONDITIONS USING 3D OPTICAL COHERENCE TOMOGRAPHY DATA AND IMAGES
Granted Nov 04, 2025 • 2y 5m to grant
Patent 12373946
ASSAY READING METHOD
Granted Jul 29, 2025 • 2y 5m to grant
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 67%
Grant Probability With Interview: 55% (-11.4% lift)
Median Time to Grant: 2y 10m
PTA Risk: Low
Based on 12 resolved cases by this examiner. Grant probability derived from career allow rate.

Free tier: 3 strategy analyses per month