Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
This Action is responsive to the Amendments and Remarks filed on 12/3/2025. Claims 1-3 and 5-18 are pending. Claims 1, 7, and 8 are written in independent form. Claim 4 has been previously cancelled. Claims 9-18 are new claims.
Priority
Acknowledgment is made of a claim for foreign priority to JP2023-018181, filed 2/9/2023, under 35 U.S.C. § 119(a)-(d) or (f). Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Claim Objections
Claim 17 is objected to because of the following informalities:
Claim 17 appears to recite a typographical error of “imaged data” where the intent is understood as reciting “image [[imaged]] data”, consistent with the rest of the claims, including similarly recited new dependent claim 18. Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-3 and 5-18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
Regarding Claims 1, 7, and 8, performing the limitation “display a screen…including…a first item corresponding to a region specified in scanned image data” followed by the limitation/step of “obtaining image data by a single scan performed based on a selection of the displayed object corresponding to the set rule” renders the claims indefinite because it is unclear how an “item corresponding to a region specified in scanned image data” can be displayed prior to the step of actually scanning the image data (“obtain[ing] image data by a single scan performed…”). For the purpose of compact prosecution, and based on Paragraphs [0049] and [0061] and Figures 7 and 12, the first limitation is being interpreted as reciting “display a screen…including…a first item corresponding to a region specified to be processed after scanning image data”.
Dependent Claims 2-3, 5-6, and 9-18 inherit the deficiencies of their parent claims and are therefore rejected for the same reason(s) stated for their parent claims.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-3 and 5-18 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claimed invention is directed to one or more abstract ideas without significantly more. The judicial exception is not integrated into a practical application, and the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. The eligibility analysis in support of these findings is provided below.
As per Claims 1, 7, and 8,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed apparatus (claims 1-3, 5-6, and 9-18), method (claim 7), and non-transitory computer-readable storage medium (claim 8) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
STEP 2A Prong One: The independent claims 1, 7, and 8 recite the following limitations directed to an abstract idea:
Set a rule relating to a file name by connecting items selected by a user from among the items included in the screen;
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating items selected by a user from among items included in the screen, and based on the observation and evaluation, making a judgement and/or opinion to connect items selected by the user and set a rule relating to a file name based on the connecting.
obtain image data by a single scan performed based on a selection of the displayed object corresponding to the set rule;
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing the image data and a selection of the displayed object corresponding to the set rule, and, based on the observation, evaluating and processing the observed image data.
Generate a plurality of files from the image data by dividing the image data,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the image data, and based on the observation and evaluation, making a judgement and/or opinion to divide the image data into a plurality of files.
wherein each file of the plurality of files is generated from each of the divided image data;
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and making judgement that organizes the observed image data into separate files including at least a region/section of the image data previously identified through observation and evaluation.
It is noted that an image is already a type of file comprising regions of the image, whether it is printed or digital.
Analyze each of the divided image data and automatically extract respective character strings corresponding to the selected items connected in the rule from the obtained image data,
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the divided image data, and based on the observation and evaluation, making a judgement and/or opinion of respective character strings corresponding to the selected items connected in the rule from the obtained image data.
Automatically generate a respective filename of each file of the plurality of files using the extracted respective character strings;
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the character string extracted from the at least one region included in the corresponding image data of the file, and making a judgement or opinion based on the observation and evaluation to create a respective filename for each of the plurality of files using the respective observed or extracted character string extracted from the at least one region included in the corresponding image data.
Determine whether at least two generated filenames among the generated filenames of the plurality of files are identical; and
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing the generated filenames of the plurality of files, and based on the observation, evaluating whether at least two generated filenames are identical;
In a case where it is determined that the at least two generated filenames among the generated filenames of the plurality of files are identical, add an identifier to determine filenames so that the at least two identical filenames are distinguished.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating similarities between filenames and making a judgement and/or opinion that two filenames are identical and then resolving the duplicate names by making a judgement to add an identifier to differentiate the filenames.
STEP 2A Prong Two: Claims 1, 7, and 8 recite that the steps are performed using “at least one memory”, “at least one processor”, and “a non-transitory computer-readable storage medium”, which are a high-level recitation of generic computer components and represent mere instructions to apply the abstract idea on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
The claim(s) recite the following additional elements:
Display a screen for setting a rule relating to a file name,
The limitation recites a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The screen including items which can be selected by a user and include a first item corresponding to a region specified in scanned image data by a user and a second item corresponding to an attribute;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being displayed as part of the “screen” as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Display another screen including
The limitation recites a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
an object corresponding to the set rule;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being displayed as part of the “another screen” as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
the extracted respective character strings including at least one of a character string recognized by character recognition process on the region corresponding to the first item specified by the user and a character string recognized by the character recognition process and corresponding to the attribute corresponding to the second item;
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being included in the extracted respective character strings as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
STEP 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
With respect to “The screen including items which can be selected by a user and include a first item corresponding to a region specified in scanned image data by a user and a second item corresponding to an attribute;”, identified as insignificant extra-solution activity above, this is also well-understood, routine, and conventional (WURC) activity when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to “an object corresponding to the set rule;”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to “the extracted respective character strings including at least one of a character string recognized by character recognition process on the region corresponding to the first item specified by the user and a character string recognized by the character recognition process and corresponding to the attribute corresponding to the second item;”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
Looking at the claim as a whole does not change this conclusion and the claim is ineligible.
As per Dependent Claims 2-3, 5-6, and 9-18,
STEP 1: In accordance with Step 1 of the eligibility inquiry (as explained in MPEP 2106), the claimed apparatus (claims 1-3, 5-6, and 9-18), method (claim 7), and non-transitory computer-readable storage medium (claim 8) are directed to one of the eligible categories of subject matter and therefore satisfy Step 1.
STEP 2A Prong One: The dependent claims 2-3, 5-6, and 9-18 recite the following limitations directed to an abstract idea:
The limitation of Dependent Claim 3 includes the step(s) of:
Wherein, in a case where the respective filename of a file, among the plurality of files to be transmitted, is identical to that of a file stored in a predetermined folder of the external server, an identifier, to distinguish the respective filename of the file to be transmitted, is added to the respective filename.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating similarities between filenames at different locations, making a judgement and/or opinion that two filenames at the different locations are the same, and then resolving the duplicate names by making a judgement to add an identifier to differentiate the filenames.
The limitation of Dependent Claim 16 includes the step(s) of:
Wherein the image data is divided in numbers of pages specified by a user.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the image data and a number of pages specified by a user, and based on the observation and evaluation, making a judgement and/or opinion to divide the image data in the numbers of pages specified by the user.
The limitation of Dependent Claim 17 includes the step(s) of:
Wherein the image data is divided with barcodes recognized in the image data.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the image data and barcodes, and based on the observation and evaluation, making a judgement and/or opinion to divide the image data with barcodes recognized in the image data.
The limitation of Dependent Claim 18 includes the step(s) of:
Wherein the image data is divided with white sheets recognized in the image data.
The limitation recites a mental process of observation, evaluation, judgement, and/or opinion capable of being performed by the human mind by observing and evaluating the image data and white sheets, and based on the observation and evaluation, making a judgement and/or opinion to divide the image data with white sheets recognized in the image data.
STEP 2A Prong Two: The claim(s) recite the following additional elements:
The limitation of Dependent Claim 2 includes the step(s) of:
Wherein each file of the plurality of files is transmitted to an external server.
The limitation recites an insignificant extra-solution activity as sending or receiving data (i.e., mere data gathering), in particular transmitting files, as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
The limitation of Dependent Claim 3 includes the step(s) of:
the file to be transmitted is transmitted to the external server.
The limitation recites an insignificant extra-solution activity as sending or receiving data (i.e., mere data gathering), in particular transmitting files, as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
The limitation of Dependent Claim 5 includes the step(s) of:
Wherein the item includes an item corresponding to the identifier to distinguish the filename.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to distinguish the filename as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 6 includes the step(s) of:
The information processing apparatus is further configured to communicate with an image processing apparatus including a scanner, wherein the image data is obtained by the scanner of the image processing apparatus scanning a plurality of documents in the single scan.
The limitation recites a high-level recitation of generic computer components and represents mere instructions to apply on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application. The limitation further recites an insignificant extra-solution activity as sending or receiving data (i.e., mere data gathering), in particular communicating between the apparatus and a generic scanner and obtaining image data, as identified in MPEP 2106.05(g), and does not provide integration into a practical application.
The limitation of Dependent Claim 9 includes the step(s) of:
Wherein the rule is a rule for transmitting the obtained image data to a specific transmission destination; and
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/instructions being used to represent the rule as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Wherein the specific transmission destination is the external server.
The limitation recites an insignificant extra-solution activity as selecting a particular type of destination being used to represent the transmission destination as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 10 includes the step(s) of:
Wherein rules including the rule are set for respective storage servers.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/instructions being used to represent rules as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 11 includes the step(s) of:
Wherein rules including the rule are set for respective use cases.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/instructions being used to represent rules as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 12 includes the step(s) of:
Wherein rules including the rule are set for respective types of work.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data/instructions being used to represent rules as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
The limitation of Dependent Claim 13 includes the step(s) of:
Wherein the items are connected by a drag and drop operation on the items.
The limitation recites a high-level recitation of generic computer components and represents mere instructions to apply (associating/connecting items via drag and drop interface) on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The limitation of Dependent Claim 14 includes the step(s) of:
Wherein x button is displayed on the items and one of the items can be deleted by an operation on the x button displayed on the one of the items.
The limitation recites a high-level recitation of generic computer components and represents mere instructions to apply (deleting items via an associated x button) on a computer as in MPEP 2106.05(f), which does not provide integration into a practical application.
The limitation of Dependent Claim 15 includes the step(s) of:
Wherein the items further include at least one of a name of a login user, time, date, a device location, a device name, a serial number of a device, a delimiter, a barcode value, and a QR code value.
The limitation recites an insignificant extra-solution activity as selecting a particular type of data being used to represent items as identified in MPEP 2106.05(g) and does not provide integration into a practical application.
Viewing the additional limitations together and the claim as a whole, nothing provides integration into a practical application.
STEP 2B:
The conclusions for the mere implementation using a computer are carried over and do not provide significantly more.
With respect to Claim 2 reciting “Wherein each file of the plurality of files is transmitted to an external server.”, identified as insignificant extra-solution activity above, this is also WURC as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 3 reciting “the file to be transmitted is transmitted to the external server.”, identified as insignificant extra-solution activity above, this is also WURC as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 5 reciting “Wherein the item includes an item corresponding to the identifier to distinguish the filename.”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 6 reciting “The information processing apparatus is further configured to communicate with an image processing apparatus including a scanner, wherein the image data is obtained by the scanner of the image processing apparatus scanning a plurality of documents in the single scan.”, identified as insignificant extra-solution activity above, this is also WURC as court-identified; see MPEP 2106.05(d)(II)(i).
With respect to Claim 9 reciting “Wherein the rule is a rule for transmitting the obtained image data to a specific transmission destination;”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 9 reciting “Wherein the specific transmission destination is the external server.”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 10 reciting “Wherein rules including the rule are set for respective storage servers.”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 11 reciting “Wherein rules including the rule are set for respective use cases.”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 12 reciting “Wherein rules including the rule are set for respective types of work.”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
With respect to Claim 15 reciting “Wherein the items further include at least one of a name of a login user, time, date, a device location, a device name, a serial number of a device, a delimiter, a barcode value, and a QR code value.”, identified as insignificant extra-solution activity above, this is also WURC when claimed in a merely generic manner, as court-identified; see MPEP 2106.05(d)(II)(iv).
Looking at the claim as a whole does not change this conclusion and the claim is ineligible.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-2 and 5-15 are rejected under 35 U.S.C. 103 as being unpatentable over Tokita (U.S. Pre-Grant Publication No. 2020/0174637) and further in view of Ono et al. (U.S. Patent No. 8,953,065, hereinafter referred to as Ono) and Matsuki (U.S. Pre-Grant Publication No. 2006/0174054).
Regarding Claim 1:
Tokita teaches an information processing apparatus comprising:
At least one memory configured to store instructions (Para. [0006]); and
At least one processor communicatively connected to the at least one memory and configured to execute the stored instructions to (Para. [0006]):
Display a screen for setting a rule relating to a file name;
Tokita teaches “A scan instruction unit 421 receives information input by the user through the application display unit 423, and together with scan settings and transfer settings included in the input information” (Para. [0037]), thereby teaching a screen (application display unit 423) for setting/inputting scan instruction information (a rule relating to a file name of image data to be scanned and named/titled). Tokita further teaches “If the scan request including the scan settings is received from the scan instruction unit 421, the scan unit 411 controls the scanner 222 via the scanner I/F 217 to read an image on a document, thereby generating image data.” (Para. [0039]).
Obtain image data by a single scan performed based on a selection of the displayed object corresponding to the set rule;
Tokita teaches “scanner 222 reads a document to generate image data” (Para. [0030]). Tokita further teaches “if it is determined that the ‘start scan’ key 621 is pressed, then in step S503, using the setting items selected using the scan setting keys 601 to 605, the scan instruction unit 421 transmits a scan request to execute a scan process” (Para. [0055]) and “In step S504, image data scanned according to the scan request is internally transferred to the application reception unit 422 via the transfer unit 412 by using FTP” (Para. [0056]).
Display another screen including an object corresponding to the set rule;
Tokita teaches “the application display unit 423 displays a scan setting screen 600 on the operation unit 220. Using keys 601 to 605 through the scan setting screen 600, the user of the MFP 101 makes settings regarding a scan operation to be performed by the scan unit 411 and gives an instruction to start a scan” (Paras. [0053] – [0054]) where the scan setting screen displays objects corresponding to different scanning rules (Fig. 6).
Generate a plurality of files from the image data by dividing the image data, wherein each file of the plurality of files is generated from each of the divided image data;
Tokita teaches “the document division screen display unit 426 sets document division information so that the scanned image data including a plurality of pages is divided according to documents” (Para. [0061]).
Tokita further teaches “A scanner I/F 217 connects a scanner 222 and the control unit 210. The scanner 222 reads a document to generate image data, and inputs the image data to the control unit 210 via the scanner I/F 217. The MFP 101 can convert the image data generated by the scanner [into] a file, and transmit the converted file or email the file.” (Para. [0030]) where “the image analysis unit 425 stores the page image data in association with a first page identifier. As described above, page image data determined as belonging to a different document is assigned a first page identifier (i.e., information indicating a position separating documents), so that if a plurality of documents is collectively read by an auto sheet feeder, the documents are automatically divided for each document.” (Para. [0069]). Therefore, Tokita teaches scanning a plurality of documents at a time to obtain image data using a scanner and dividing the image data for each document.
Analyze each of the divided image data and automatically extract respective character strings corresponding to the selected items connected in the rule from the obtained image data,
Tokita teaches “If it is determined that the “start scan” key 621 is pressed, then in step S503, using the setting items selected using the scan setting keys 601 to 605, the scan instruction unit 421 transmits a scan request to execute a scan process” (Para. [0055]) and “In step S505, the image analysis unit 425 instructed to perform an analysis by the application reception unit 422 analyzes the received image data” (Para. [0057]) and extracting a feature amount where “the feature amount may be feature information obtained by converting information such as a character string extracted by performing an optical character recognition (OCR) process on an image into a feature vector using a machine learning engine, or may be feature information regarding the layout of a character string or a rule” (Para. [0061]).
the extracted respective character strings including at least one of
a character string recognized by character recognition process on the region corresponding to the first item specified by the user and
Tokita teaches “the feature amount may be feature information obtained by converting information such as a character string extracted by performing an optical character recognition (OCR) process on an image into a feature vector using a machine learning engine” (Para. [0061]), thereby teaching performing a character recognition process on the region specified by the user. It is noted that the region taught by Tokita is understood as the entire image.
a character string recognized by the character recognition process and corresponding to the attribute corresponding to the second item;
Tokita further teaches “If the scan request including the scan settings is received from the scan instruction unit 421, the scan unit 411 controls the scanner 222 via the scanner I/F 217 to read an image on a document, thereby generating image data.” (Para. [0039]) and “the feature amount may be feature information obtained by converting information such as a character string extracted by performing an optical character recognition (OCR) process on an image into a feature vector using a machine learning engine, or may be feature information regarding the layout of a character string or a rule” (Para. [0061]).
Automatically generate a respective filename of each file of the plurality of files using the extracted respective character strings;
Tokita teaches “the electronic file storage location path is created by adding a file name to the received host name and folder path. In the present exemplary embodiment, the method for generating the file name is not limited to the above. For example, a character string indicating the date and time of transmission, a character string obtained by performing a character recognition process on the image data, or a character string acquired by input of the user can be used as the file name.” (Para. [0110]). Therefore, Tokita teaches naming settings being set and used corresponding to a region of the image file comprising an optically recognized character string to be used for the filename.
Tokita explicitly teaches all of the elements of the claimed invention as recited above except:
the screen including items which can be selected by a user and include a first item corresponding to a region specified to be processed after scanning image data by a user and a second item corresponding to an attribute;
Set a rule relating to a file name by connecting items selected by a user from among the items included in the screen;
Determine whether at least two generated filenames among the generated filenames of the plurality of files are identical; and
In a case where it is determined that the at least two generated filenames among the generated filenames of the plurality of files are identical, add an identifier to determine filenames so that the at least two identical filenames are distinguished.
However, in the related field of endeavor of capturing and naming image files, Ono teaches:
Determine whether at least two generated filenames among the generated filenames of the plurality of files are identical; and
Ono teaches “the titling unit 282 judges whether or not the same titles are given to a plurality of images and/or a plurality of image groups (S920)” (Col. 15 Lines 59-63).
In a case where it is determined that the at least two generated filenames among the generated filenames of the plurality of files are identical, add an identifier to determine filenames so that the at least two identical filenames are distinguished.
Ono teaches “in case it is judged that the same titles are given to the plurality of images and/or the plurality of image group, the period terminology selecting unit 272 selects a new term which is different from the term selected in 912 and of which time width is shorter than that of the term (S924)…[and] selecting unit 292 selects a new term which is different from the term selected in 916 and of which size is smaller than that of the term (S926)” (Col. 16 Lines 5-28).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Ono and Tokita at the time that the claimed invention was effectively filed, to have combined the renaming of duplicate file names/titles, as taught by Ono, with the systems and methods for capturing and transferring electronic files, as taught by Tokita.
One would have been motivated to make such combination because Ono teaches “since the titling unit 282 gives a different title for each of a plurality of albums and/or a plurality of images, the user 180 can easily distinguish the plurality of albums and/or the plurality of images from each other” (Col. 12 Lines 2-5) and it would be obvious to a person having ordinary skill in the art that a user viewing the files would have an improved experience by being able to easily distinguish the files.
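For clarity of the record, the duplicate-name handling discussed above can be illustrated with a minimal sketch. The function name and the underscore-numeral identifier scheme are illustrative assumptions of the examiner, not the method of any cited reference: generated filenames are checked for collisions, and an identifier is appended so that identical names are distinguished.

```python
def distinguish_filenames(names):
    """Append a numeric identifier to filenames that would otherwise collide.

    Illustrative sketch only; the identifier format is an assumption.
    """
    seen = {}
    result = []
    for name in names:
        if name in seen:
            # A duplicate was generated: add an identifier to distinguish it.
            seen[name] += 1
            result.append(f"{name}_{seen[name]}")
        else:
            seen[name] = 0
            result.append(name)
    return result
```

For example, two files that both generate the name “invoice” would be stored as “invoice” and “invoice_1”, so the user can distinguish them.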
Ono and Tokita explicitly teach all of the elements of the claimed invention as recited above except:
the screen including items which can be selected by a user and include a first item corresponding to a region specified in scanned image data by a user and a second item corresponding to an attribute;
Set a rule relating to a file name by connecting items selected by a user from among the items included in the screen;
However, in the related field of endeavor of file management and renaming, Matsuki teaches:
the screen including items which can be selected by a user and include a first item corresponding to a region specified in scanned image data by a user and a second item corresponding to an attribute;
Matsuki teaches a “rename setting window” where “The user manipulates such rename setting window using the input device 104 such as a keyboard, mouse, and the like. The user can give instructions such as specification of a file to be renamed, settings of generation rules of a new filename, execution or cancel of rename processing, and the like to the file management apparatus. In this embodiment, setting values of items that the user can set on the rename setting window are stored, and the state finally set by the user may be resumed and displayed in the second or subsequent launch of the application.” (Para. [0040] & Figs. 3-5). Tokita further teaches “the feature amount may be feature information obtained by converting information such as a character string extracted by performing an optical character recognition (OCR) process on an image into a feature vector using a machine learning engine, or may be feature information regarding the layout of a character string or a rule.” (Para. [0061]), thereby teaching a rule for performing optical character recognition where the specified region is the entire image.
Set a rule relating to a file name by connecting items selected by a user from among the items included in the screen;
Matsuki teaches a “rename setting window” where “The user manipulates such rename setting window using the input device 104 such as a keyboard, mouse, and the like. The user can give instructions such as specification of a file to be renamed, settings of generation rules of a new filename, execution or cancel of rename processing, and the like to the file management apparatus. In this embodiment, setting values of items that the user can set on the rename setting window are stored, and the state finally set by the user may be resumed and displayed in the second or subsequent launch of the application.” (Para. [0040]).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Matsuki, Ono, and Tokita at the time that the claimed invention was effectively filed, to have combined the renaming tool interface, as taught by Matsuki, with the renaming of duplicate file names/titles, as taught by Ono, and the systems and methods for capturing and transferring electronic files, as taught by Tokita.
One would have been motivated to make such combination because Matsuki teaches checking to determine if new filenames are unusable before execution and “the user can know the presence/absence of a situation that may cause an error upon execution of the rename processing based on the current settings before execution and in real time, and can appropriately change the settings” (Para. [0077]) and it would be obvious to a person having ordinary skill in the art knowing a situation may cause an error before the error occurs and allowing for the appropriate change in the setting to address the cause would reduce the amount of work required to undo errors after/in response to the error occurring.
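The claim-mapped concept of setting a file-name rule by connecting user-selected items can be sketched as follows. The item kinds (a fixed string, OCR text of a specified region, and a date attribute) and all names below are hypothetical examples chosen by the examiner for illustration, assuming the items are joined in the order selected:

```python
from datetime import date

def apply_rule(rule_items, context):
    """Build a file name by connecting the items of a rule in order.

    rule_items: ordered list of (kind, value) tuples selected by a user.
    context: values resolved at scan time (e.g., OCR text of a region).
    All item kinds here are illustrative assumptions, not taken from
    any cited reference.
    """
    parts = []
    for kind, value in rule_items:
        if kind == "fixed":          # a fixed character string
            parts.append(value)
        elif kind == "region_ocr":   # text recognized in a specified region
            parts.append(context["regions"][value])
        elif kind == "date":         # an attribute such as the scan date
            parts.append(context["date"].isoformat())
    return "_".join(parts)
```

Under these assumptions, a rule connecting the items (“invoice”, the OCR text of a region named “total_box” reading “ACME”, and the date December 3, 2025) would yield the file name “invoice_ACME_2025-12-03”.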
Regarding Claim 2:
Matsuki, Ono, and Tokita further teach:
Wherein each file of the plurality of files is transmitted to an external server.
Tokita teaches “the application transfer unit 424 externally transfers the electronic files of the respective documents to a folder indicated by the image data storage location path created in step S514 and stores the electronic files in the folder” (Para. [0113]) and “the generated electronic files are transferred to the external file server” (Para. [0117]).
Regarding Claim 5:
Matsuki, Ono, and Tokita further teach:
Wherein the items include an item corresponding to the identifier to distinguish the filename.
Ono teaches “since the titling unit 282 gives a different title for each of a plurality of albums and/or a plurality of images, the user 180 can easily distinguish the plurality of albums and/or the plurality of images from each other” (Col. 11 Line 61 – Col. 12 Line 5).
Regarding Claim 6:
Matsuki, Ono, and Tokita further teach:
The information processing apparatus is further configured to communicate with an image processing apparatus including a scanner, wherein the image data is obtained by the scanner of the image processing apparatus scanning a plurality of documents in the single scan.
Tokita teaches “A scanner I/F 217 connects a scanner 222 and the control unit 210. The scanner 222 reads a document to generate image data, and inputs the image data to the control unit 210 via the scanner I/F 217. The MFP 101 can convert the image data generated by the scanner [into] a file, and transmit the converted file or email the file.” (Para. [0030]) where “the image analysis unit 425 stores the page image data in association with a first page identifier. As described above, page image data determined as belonging to a different document is assigned a first page identifier (i.e., information indicating a position separating documents), so that if a plurality of documents is collectively read by an auto sheet feeder, the documents are automatically divided for each document.” (Para. [0069]). Therefore, Tokita teaches scanning a plurality of documents at a time to obtain image data using a scanner.
Regarding Claim 7:
All of the limitations herein are similar to some or all of the limitations of Claim 1.
Regarding Claim 8:
Some of the limitations herein are similar to some or all of the limitations of Claim 1.
Matsuki, Ono, and Tokita further teach:
A non-transitory computer-readable storage medium storing a computer program for causing the computer to perform a method for controlling an information processing apparatus (Tokita – Para. [0119]).
Regarding Claim 9:
Matsuki, Ono, and Tokita further teach:
Wherein the rule is a rule for transmitting the obtained image data to a specific transmission destination; and
Tokita teaches “The user of the MFP 101 sets an upload destination (sets an external transfer destination) on the upload setting screen 1300, and the application transfer unit 424 executes the process of uploading files to the file server 102 as the set upload destination.” (Para. [0104]).
Wherein the specific transmission destination is the external server.
Tokita teaches “The user of the MFP 101 sets an upload destination (sets an external transfer destination) on the upload setting screen 1300, and the application transfer unit 424 executes the process of uploading files to the file server 102 as the set upload destination.” (Para. [0104]).
Regarding Claim 10:
Matsuki, Ono, and Tokita further teach:
Wherein rules including the rule are set for respective storage servers.
Tokita teaches “The user of the MFP 101 sets an upload destination (sets an external transfer destination) on the upload setting screen 1300, and the application transfer unit 424 executes the process of uploading files to the file server 102 as the set upload destination.” (Para. [0104]).
It is noted that a set of servers can be understood as a set comprising one server. It is further noted that Tokita teaches “The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions” as “other embodiments” (Para. [0119]) thereby teaching multiple different computers and servers operating across a network.
Regarding Claim 11:
Matsuki, Ono, and Tokita further teach:
Wherein rules including the rule are set for respective use cases.
Matsuki teaches rules including different use cases such as “set same filename with different extensions” and “copy and rename” (Para. [0059] – [0060]).
Regarding Claim 12:
Matsuki, Ono, and Tokita further teach:
Wherein rules including the rule are set for respective types of work.
Matsuki teaches “A "same filename with different extensions mode" selection check box 402 is provided to make the user specify the method of assigning new filenames for files to be renamed such as RAW data, JPEG data, and the like, which have the same filename except for their extensions. If the "same filename with different extensions mode" selection check box 402 is not checked, the same filename with different extensions mode is turned off, and no special processing is made. If the "same filename with different extensions mode" selection check box 402 is checked, the same filename with different extensions mode is turned on, and new filenames for files to be renamed which have the same filename except for their extensions are generated to have the same new filename except for their extensions.” (Para. [0059]).
Regarding Claim 13:
Matsuki, Ono, and Tokita further teach:
Wherein the items are connected by a drag and drop operation on the items.
Matsuki teaches “More specifically, the user selects the file to be renamed or folder on the file list display window 605, and drags and drops it on an arbitrary area of the rename setting window shown in FIG. 6C. In response to this operation, the application that provides the file list display window 605 notifies the file management application of the information of the dragged and dropped file. As a result, the file management application can additionally display the filename of the file selected by the user on the original filename list display window 305.” (Para. [0048]) thereby teaching using a drag and drop operation in the file renaming tool to connect items.
Regarding Claim 14:
Matsuki, Ono, and Tokita further teach:
Wherein x button is displayed on the items and one of the items can be deleted by an operation on the x button displayed on the one of the items.
Matsuki teaches “The selection items of the basic setting menu 303, format setting menu 304, and the like are not limited to those in the above example, and other items may be added or the items may be changed or deleted.” (Para. [0067]).
Matsuki further teaches an “add” and “delete” button for adding and removing items (Para. [0042]). It is also noted that Matsuki teaches using an x button for deleting/closing items by teaching the three boxes at the top right of Figures 3, 6A-6C, 9A-9B, 11, and 12, which are understood in the art as being placeholders for the well-known minimize, maximize, and “x” close/delete buttons in at least the Windows operating system.
Regarding Claim 15:
Matsuki, Ono, and Tokita further teach:
Wherein the items further include at least one of a name of a login user, time, date, a device location, a device name, a serial number of a device, a delimiter, a barcode value, and a QR code value.
Matsuki teaches at least “the filename setting menu 301 is composed of three drop-down lists which allow the filename to include "arbitrary character string", "photographing date/time", and "serial number".” (Para. [0054]).
Claim(s) 3 is rejected under 35 U.S.C. 103 as being unpatentable over Matsuki, Tokita, and Ono, and further in view of Cazier (U.S. Pre-Grant Publication No. 2003/0200229).
Regarding Claim 3:
Matsuki, Ono, and Tokita explicitly teach all of the elements of the claimed invention as recited above except:
Wherein, in a case where the respective filename of a file, among the plurality of files to be transmitted, is identical to that of a file stored in a predetermined folder of the external server, an identifier, to distinguish the respective filename of the file to be transmitted, is added to the respective filename and the file to be transmitted is transmitted to the external server.
However, in the related field of endeavor of automatic renaming of files during file management, Cazier teaches:
Wherein, in a case where the respective filename of a file, among the plurality of files to be transmitted, is identical to that of a file stored in a predetermined folder of the external server, an identifier, to distinguish the respective filename of the file to be transmitted, is added to the respective filename and the file to be transmitted is transmitted to the external server.
Cazier teaches comparing file names between a source location and a destination location prior to moving the file from the source location to the destination location (Figure 1) where “When the system determines that a file name is a duplicate of a file name at the destination location, the system will check to see if it is the same file or a different file (112). When it is a different file with a duplicate name, the system will automatically rename one of the files to a name not already in use in the directory (114)” (Para. [0018]). Cazier further teaches adding an identifier to the filename to distinguish the filename from the detected duplicate by teaching “changing the first letter in the file name is only an example of a way to change the file names. There are many ways to change file names and retain their sequential display order and this invention is not limited to only changing the first letter.” (Para. [0032]).
Cazier also teaches “rename one of the files and then move file from source location to destination” (Figure 1 Step 114).
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Cazier, Matsuki, Ono, and Tokita at the time that the claimed invention was effectively filed, to have combined the comparison and renaming of duplicate files between a source and target destination, as taught by Cazier, with the renaming tool interface, as taught by Matsuki, the renaming of duplicate file names/titles, as taught by Ono, and the systems and methods for capturing and transferring electronic files, as taught by Tokita.
One would have been motivated to make such combination because Cazier teaches “a data management system can improve the saving and transferring of data files by automatically renaming new data files when transferring files from a source location to a destination location” (Para. [0008]) and Tokita teaches transferring data from a source location to a transmission destination without determining if any files at the transmission destination have duplicate file names that need to be resolved.
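The combined Cazier behavior relied upon above, i.e., checking the destination for a name collision before transfer and renaming the outgoing file, can be illustrated by a short sketch. The function name and the parenthesized-counter identifier are the examiner's illustrative assumptions, not the renaming scheme of any cited reference:

```python
def name_for_transfer(filename, existing_names):
    """Return a destination-safe name before transmitting a file.

    If `filename` already exists in the destination folder, append an
    incrementing identifier until the name is unique. Illustrative
    sketch only; the identifier format is an assumption.
    """
    if filename not in existing_names:
        return filename
    i = 1
    while f"{filename}({i})" in existing_names:
        i += 1
    return f"{filename}({i})"
```

For example, transmitting “scan.pdf” to a folder that already contains “scan.pdf” would store the outgoing file under “scan.pdf(1)”, preserving both files.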
Claim(s) 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Matsuki, Tokita, and Ono, and further in view of Liao (U.S. Pre-Grant Publication No. 2016/0042229).
Regarding Claim 16:
Matsuki, Ono, and Tokita explicitly teach all of the elements of the claimed invention as recited above except:
Wherein the image data is divided in numbers of pages specified by a user.
However, in the related field of endeavor of image filing by scanning, Liao teaches:
Wherein the image data is divided in numbers of pages specified by a user.
Liao teaches “when the user needs to scan a pile of documents, the user can insert a blank page, a specific color page or a barcode to split the documents. When the scanner scans the blank page, the specific color page, or the barcode, the scanner automatically combines the scanned image data as an independent file.” (Para. [0007]), thereby teaching dividing the image data in numbers of pages specified by a user, who inserts the blank page, color page, or barcode to split the documents.
Thus, it would have been obvious to one of ordinary skill in the art, having the teachings of Liao, Matsuki, Ono, and Tokita at the time that the claimed invention was effectively filed, to have combined the different options used for dividing scanned image data, as taught by Liao, with the renaming tool interface, as taught by Matsuki, the renaming of duplicate file names/titles, as taught by Ono, and the systems and methods for capturing and transferring electronic files, as taught by Tokita.
One would have been motivated to make such combination because Tokita teaches a system that “divides the scanned image data including the plurality of pages for each document” (Para. [0048]) without providing details as to how the dividing is performed and Liao teaches further details as to how by teaching “when the user needs to scan a pile of documents, the user can insert a blank page, a specific color page or a barcode to split the documents. When the scanner scans the blank page, the specific color page, or the barcode, the scanner automatically combines the scanned image data as an independent file” (Para. [0007]).
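The separator-page splitting relied upon from Liao can be illustrated with a minimal sketch. The representation of pages and the predicate interface are the examiner's illustrative assumptions: a single-scan page stream is split into independent documents wherever a separator page (blank page, color page, or barcode page) is detected, and the separator pages themselves are dropped.

```python
def split_documents(pages, is_separator):
    """Split a single-scan page stream into per-document page lists.

    `is_separator` is a caller-supplied predicate identifying a blank
    page, specific color page, or barcode page. Illustrative sketch
    only; not the algorithm of any cited reference.
    """
    documents, current = [], []
    for page in pages:
        if is_separator(page):
            # Close out the current document at each separator page.
            if current:
                documents.append(current)
            current = []
        else:
            current.append(page)
    if current:
        documents.append(current)
    return documents
```

For example, a scan of pages [p1, p2, separator, p3] would be divided into two files: one containing p1 and p2, and one containing p3.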
Regarding Claim 17:
Liao, Matsuki, Ono, and Tokita further teach:
Wherein the image data is divided with barcodes recognized in the image data.
Liao teaches “when the user needs to scan a pile of documents, the user can insert a blank page, a specific color page or a barcode to split the documents. When the scanner scans the blank page, the specific color page, or the barcode, the scanner automatically combines the scanned image data as an independent file.” (Para. [0007]).
Regarding Claim 18:
Liao, Matsuki, Ono, and Tokita further teach:
Wherein the image data is divided with white sheets recognized in the image data.
Liao teaches “when the user needs to scan a pile of documents, the user can insert a blank page, a specific color page or a barcode to split the documents. When the scanner scans the blank page, the specific color page, or the barcode, the scanner automatically combines the scanned image data as an independent file.” (Para. [0007]).
Response to Amendment
Applicant’s Amendments, filed on 12/3/2025, are acknowledged and accepted.
In light of Applicant’s Amendments filed on 12/3/2025, the claim objection to claims 1, 7, and 8 has been withdrawn.
Response to Arguments
On page 8 of the Remarks filed on 12/3/2025, Applicant argues that “claims 1-3 and 5-8 do not recite mental processes ("concepts performed in the human mind") because claims 1-3 and 5-8 recite limitations that cannot be practically performed in the human mind” because “For example, claim 7 recites, in part, the following: "displaying a screen for setting a rule relating to a file name, the screen including items which can be selected by a user and include a first item corresponding to a region specified in scanned image data by a user and a second item corresponding to an attribute; ... displaying another screen including an object corresponding to the set rule; [and] obtaining image data by a single scan performed based on a selection of the displayed object corresponding to the set rule." And these limitations cannot be practically performed in the human mind.”
Applicant’s argument is not convincing because the aspects of the limitations that cannot practically be performed in the human mind, such as displaying a screen with particular information, are understood as high-level recitations of generic computer components and represent mere instructions to apply the abstract idea on a computer, as in MPEP 2106.05(f), which does not provide integration into a practical application.
On pages 9-10 of the Remarks filed on 12/3/2025, Applicant argues that Tokita, Ono, and Cazier do not teach all of the amended limitations of independent claims 1, 7, and 8.
Applicant’s argument is convincing and the amendments necessitated the new grounds of rejection presented herein.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Bhattal et al. (U.S. Pre-Grant Publication No. 2011/0153622) teaches a method for differentiating two or more data sets having common data set identifiers, the method comprising the steps of: selecting a plurality of data sets comprising one or more data elements, each data set being associated with a data set identifier; identifying in the selected plurality of data sets a group of the data sets having a common data set identifier; comparing each data set in the group with each other data set in the group so as to identify one or more differentiating characteristics between the data sets in the group; and associating difference data representing one or more of the identified differentiating characteristics with the corresponding data set so as to provide one or more differentiators between two or more of data sets of the group.
Honma (U.S. Pre-Grant Publication No. 2009/0122336) teaches an information processing apparatus stores therein selected to-be-transmitted image data and the predetermined transmission order of image data, and displays a list of the stored to-be-transmitted image data according to the predetermined transmission order. When receiving an instruction to edit the transmission order of a displayed list of image data, the information processing apparatus displays stored to-be-transmitted image data according to an edited transmission order, and stores the edited transmission order. When receiving an instruction to fax, the information processing apparatus combines to-be-transmitted image data into a single image data to transmit the combined image data to an image transmitting apparatus, based on the stored edited transmission order.
Tokita (U.S. Pre-Grant Publication No. 2019/0102385) teaches a display control unit that displays a UI screen for performing a predetermined process, the UI screen being a screen where unit blocks each assumed as a single continuous character string in a scan image are displayed to be selectable by the user; an OCR process unit that performs an OCR process on unit blocks selected by the user through the UI screen to thereby extract character strings; and a setting unit that sets incidental information for the predetermined process by using the character strings extracted by the OCR process unit. The display control unit separates a character string satisfying a predetermined separation condition among the character strings extracted by the OCR process unit from the unit blocks selected by the user, and displays the separated character strings on the UI screen such that the separated character strings are selectable by the user.
Torikai et al. (U.S. Pre-Grant Publication No. 2012/0200716) teaches an information processing apparatus which is connectable with an image supplying apparatus having a unit which acquires position information, comprising: a file acquiring unit which acquires, from the image supply apparatus, an image file having shooting position information attached and a log file indicating locations along a path of movement of the image supplying apparatus; a file designating unit which designates an image file and a log file to be transferred from the image supplying apparatus; and a determining unit which determines, in a case that a file to be transferred is designated by the file designating unit, whether the designated file has already been transferred, wherein the determining unit determines whether the file has already been transferred, in accordance with a determination procedure that differs between a case where the designated file is an image file and a case where the designated file is a log file.
Kaneko et al. (U.S. Patent No. 11,221,988) teaches a file management device that makes it possible to assign a file name according to a user's preference. There is provided a file management device including: a memory; and a processor coupled to the memory, the processor configured to: presume a naming rule of the file names based on file names of data files present in a folder; register the naming rule, presumed by the rule presuming section, in a rule storage section in association with the folder; and assign a file name to a data file according to a naming rule associated with a folder in which the data file is present among the naming rules stored in the rule storage section.
Watanabe (U.S. Patent No. 7,752,163) teaches an image pickup apparatus includes an image pickup unit configured to capture an image of an object; a storing unit configured to store file path information of the image; a specifying unit configured to specify a file path for the image based on the file path information; a determination unit configured to determine whether a second image having a second file path that is the same as the file path exists on an external recording medium; a file path changing unit configured to, if it is determined by the determination unit that the second image having the second file path that is the same as the file path exists on the external recording medium, change the second file path recorded on the external recording medium; and a recording control unit configured to record the image on the external recording medium.
Lotz (U.S. Patent No. 8,427,684) teaches receiving a print job including a plurality of print files, processing a first resource in a first print file by determining if the name of the first resource matches a name of a previously processed resource, determining if data within the first resource matches data within a previously processed resource if the name of the first resource matches the name of the previously processed resource and renaming the first resource if the data within the first resource does not match data within the previously processed resource.
Hashimoto et al. (U.S. Pre-Grant Publication No. 2024/0275900) teaches receive, from a reading unit that reads a document having multiple pages, multiple page data individually corresponding to the multiple pages of the document; and transmit, to an external storage, the multiple page data to be stored in the external storage. The circuitry starts the transmitting, to an external storage, the multiple page data to be stored in the external storage before reading all pages of the document completes. The circuitry receives an edit instruction for editing page data transmitted to the external storage among the multiple page data; and transmits, to the external storage, edit data for reflecting the edit instruction in a content to be displayed based on the multiple page data read from the external storage, so as to cause the external storage to execute processing corresponding to the edit instruction for editing the page data having been transmitted.
Inoue (U.S. Pre-Grant Publication No. 2021/0306476) teaches to solve a troublesome setting operation required in transmitting image data to a cloud storage, a setting screen is provided in which a setting made by a user in the past for image data that is similar to the image data to be transmitted is reflected, when the user makes a setting for transmitting the image data to be transmitted to the cloud storage.
Ito (U.S. Pre-Grant Publication No. 2022/0201146) teaches “the image processing unit 432 obtains the file name setting information from the request control unit 431. The file name setting information obtained in the embodiment includes not only the information on the character strings used in the file name and the character regions of the character strings but also information on a user ID identifying the target user and information on presence or absence of correction on the determined character strings made by the target user.” (Para. [0117]).
Matsumoto (U.S. Pre-Grant Publication No. 2019/0065843) teaches setting a file name and the like by using a character string obtained by performing OCR processing to a scan image, appropriate conditions can be set according to a character string to be scanned so as to increase a character recognition rate. There is provided an apparatus for performing a predetermined process to a scan image obtained by scanning a document, including: a display control unit configured to display a UI screen for performing the predetermined process, the UI screen displaying a character area assumed to be one continuous character string in the scan image in a selectable manner to a user; and a setting unit configured to determine a condition for OCR processing based on selection order of a character area selected by a user via the UI screen and a format of supplementary information for the predetermined process, perform OCR processing by using the determined condition for OCR processing to the selected character area, and set supplementary information for the predetermined process by using a character string extracted in the OCR processing.
Foreign Patent Publication JP 6818234 B2 teaches “other rule settings include automatic generation on / off setting (setting whether to allow automatic generation of a file name based on the automatic generation rule setting) and automatic generation rule setting as setting items. The auto-generated rule settings include fixed character strings and additional information settings. The fixed character string is a static character string (immediate value) to be included in the file name, and the additional information setting specifies a dynamic character string to be added to the fixed character string. As additional information settings, the date and time, the number of document pages, the user name, the box name, and the like can be specified.” (Page 3 Last Paragraph).
Kanada (U.S. Pre-Grant Publication No. 2019/0197337) teaches “the file name generation unit 21 generates a file name by setting a specific character string as a prefix, setting the date information as a suffix, and coupling the specific character string and the date information with an under bar “_”, a hyphen “-”, or the like. In a case in which “invoice” is detected as a specific character string from the character strings in data through the searching in Step S120, and “09/30/17” is extracted as the date information in Step S130, for example, a file name “invoice_2017/09/30” can be generated.” (Para. [0046]).
Arakawa (U.S. Pre-Grant Publication No. 2019/0266397) teaches “Each registered document image has, appended thereto, information used for scan assist processing, such as a result of block selection processing performed on each piece of image data and a file name assignment rule for the image data. Pieces of information appended to each registered document image are managed with a table such as that illustrated in FIG. 13A.” (Para. [0043]).
Liao (U.S. Pre-Grant Publication No. 2015/0046488) teaches “In the step 401, a keyword setting indication is received. The keyword setting indication is, for example, displayed on a user operation interface, on which the user can set up an indication for searching the keyword string. The user operation interface may comprise a display and an input device of a scanner or a peripheral, or a computer connected to the scanner, and the user can use the software operation on the computer to indicate the scanner. In the step 402, the keyword string in the initial scan image data is searched according to the keyword setting indication, and an encoded string after the keyword string is identified. For example, the keyword string is searched by way of an optical character recognition (OCR) or an intelligent character recognition (ICR). In the step 402, if the keyword string is found, then the step 403 is performed to automatically set up the file name of the initial file according to the encoded string.” (Para. [0026]).
Sakata (U.S. Pre-Grant Publication No. 2020/0280649) teaches an image processing apparatus that recognizes a character string included in image data generated by a reading unit, displays the recognized character string, and receives a selection, performed by a user, of the displayed character string. The image processing apparatus thereafter determines, as a storage destination for the image data, a folder named with the character string that is based on the received selection and thereby stores the image data in the determined storage destination. Additionally, the image processing apparatus, in response to a reading instruction being issued once, reads images of a plurality of documents to generate image data, receives, a plurality of times, a selection performed by the user of the displayed character string, and determines, as storage destinations, a plurality of folders named with the respective character strings that are based on the selections received the plurality of times.
Konishi (U.S. Pre-Grant Publication No. 2021/0232541) teaches a determiner that determines an area including a handwritten figure from image data; a recognizer that recognizes a handwritten character from the handwritten figure; an acquirer that acquires a file name; and a file generator that generates a file with a file name based on a handwritten character when the recognizer recognizes the handwritten character based on the image data and generates a file with the file name acquired by the acquirer when the recognizer does not recognize a handwritten character. The reference further teaches “First, the controller 100 acquires the image data of the handwriting area that is targeted for the character recognition process by the character recognizer 104 at Step S108 (Step S4002). Specifically, the controller 100 reads the image data on the first page of the document from the image data storage area 172 and further reads the handwriting area information from the handwriting area information storage area 174. The controller 100 may identify the area specified by the handwriting area information from the image data on the first page of the document and acquire the image data on the identified area.” (Para. [0096]).
Japanese Patent Document ID JP2014120063A teaches “the name rule of the file name of scanned data is set for each scanner by the administrator via the network. In this case, if the same name rule is set for each scanner, the name will be duplicated when the file is saved on the save server. In order to avoid this, duplication is avoided by adding the name of each scanner to the file name.” and “OCR processing may be performed on a scanned image, and a slip number recognized from the image may be used as a file name creation item, or barcode information in the image may be used as a file name creation item.”
Liu et al. (U.S. Pre-Grant Publication No. 2004/0208371) teaches discriminating between documents scanned in a batch scanning process based on various analyses of the constituent document pages. The data provided by the various analyses are compared with each other to determine whether successive pages belong to the same document. Scanned documents result in a page sequence that is analyzed to extract one or more feature attributes for each page. The feature attributes are provided to a feature comparison process in order to assess the similarity of successive pages. If a sufficient likelihood of similarity is found, the compared pages are deemed to be from the same document; otherwise, they are deemed to be from different documents, indicating the existence of a document break. Based on the document breaks, separate scan files may be established. In this manner, the invention eliminates the requirement of user intervention.
Miyamoto (U.S. Patent No. 11,330,119) teaches making it possible for a user to easily modify the recognition state of a document by document recognition processing at the time of multi-cropping processing. A preview screen is displayed on a user interface, which displays, in an overlapping manner on the scanned image, the results of the document recognition processing for a scanned image obtained by scanning a plurality of documents en bloc. Then, a button for dividing the detected document area is displayed on the preview screen so that a user can easily perform the division.
Nakano (U.S. Patent No. 12,069,218) teaches a storage unit configured to store scan data read from a plurality of documents; and a processing unit configured to acquire, based on the scan data, identification information corresponding to an identification code present in the document, and generate, based on the scan data, extraction data obtained by collecting electronic data of the documents associated with the identification information among the plurality of documents. The processing unit analyzes the extraction data and performs processing corresponding to an analysis result.
Pyla et al. (U.S. Pre-Grant Publication No. 2022/0201148) teaches allowing a user to select and send multiple scanned documents to multiple destinations in a single submission. The method includes receiving multiple scan jobs separated using a pre-defined separator at a multi-function device. Each scan job includes a document having one or more pages. The multiple scan jobs are scanned to generate multiple scanned documents, where each scanned document is generated corresponding to a single scan job. Then, each scanned document and corresponding multiple destinations are displayed to a user via a user interface for selection. Based on the user selection, each scanned document is sent to the multiple selected destinations in a single submission. The reference further teaches “A “pre-defined separator” refers to any separator that can be placed between multiple scan jobs to separate scan jobs from each other. For example, the pre-defined separator may be a blank page, which can be a white blank page or a full colored blank page. In another example, the pre-defined separator may be a page including an image (e.g., a pre-defined barcode or a pre-defined Quick Response (QR) code) which is readable as a separator by the multi-function device. These are just 2 examples, but other pre-defined separators as known or later developed separators may be used. In one example, if there are 3 scan jobs, where each scan job represents a document of a single page, then at the end of each document, a blank page is placed. Here one blank page is placed after the first scan job and a second blank page is placed after the second scan job.” (Para. [0017]).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ROBERT F MAY whose telephone number is (571)272-3195. The examiner can normally be reached Monday-Friday 9:30am to 6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Boris Gorney can be reached on 571-270-5626. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/BORIS GORNEY/Supervisory Patent Examiner, Art Unit 2154
/ROBERT F MAY/Examiner, Art Unit 2154 3/3/2025