Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Notice to Applicants
This communication is in response to the action filed on 02/14/2024.
Claims 1-11 are pending.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 02/14/2024 has been considered.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitations are:
“a document reading unit” in claim 1, defined in the specification at paragraph [0055] as an image sensor of a contact image sensor type, including digital cameras.
“a recognition unit” in claim 1, defined in the specification at paragraph [0057] as a computer program product for document recognition purposes run on a computing device.
“a difference elimination unit” in claims 1-9, defined in the specification at paragraph [0057] as a computer program product for document difference elimination purposes run on a computing device.
“an output unit” in claims 1 and 9, defined in the specification at paragraphs [0022], [0028], and [0046] as a computer function/program which allows the result to be output to an external computing device such as a copier or display.
Because these claim limitations are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, they are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have these limitations interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitations to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitations recite sufficient structure to perform the claimed function so as to avoid their being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Under Step 1, claim 11 is a method claim, claims 1-10 are apparatus/machine claims, and no claim is a manufacture claim. Under Step 2A, prong 1, all of these claims recite abstract ideas, specifically mental processes: concepts performed in the human mind including observation, evaluation, judgment, and opinion, generally described here as a human visually reviewing a scanned document to recognize characters and compare page information in order to eliminate differences between documents. These mental processes are set out more particularly below, with method claim 11 used as an example:
Recited in claim 11 as:
Generating second image data…
Recognizing a character included in first image data…
Detecting a difference between first page information and second page information…
Generating second image data in which the detected difference is eliminated.
It is noted that the above analysis is according to the 2019 Revised Patent Subject Matter Eligibility Guidance published in the Federal Register (84 FR 50) on January 7, 2019 and MPEP 2106.04(a)(2)(III).
Consider also that “If a claim recites a limitation that can practically be performed in the human mind, with or without the use of a physical aid such as pen and paper, the limitation falls within the mental processes grouping, and the claim recites an abstract idea,” per MPEP 2106.04(a)(2)(III)(B). See also footnotes 14 and 15 of the Federal Register Notice. As detailed above, the steps of generating, recognizing, detecting, etc., may practically be performed in the human mind with the use of a physical aid such as pen and paper: the user could generate second image data of a piece of written text by rewriting it using pen and paper; could, in the mind, recognize specific characters/letters used in the scanned text document by looking at the characters and recognizing characters of significance; could, in the mind, determine a difference between the scanned text document and the generated text document by looking at and comparing the documents after writing both down on paper using pen and paper and generic tools of the art; and could generate a document in which the differences have been eliminated, the differences being erasable by the user having the pencil and paper/tools of the trade. There are no additional elements for claim 11, as all limitations of claim 11 represent mental processes.
Under Step 2A, prong 2, the claims do not recite any additional elements that integrate the judicial exception into a practical application. As stated under prong 1 above, there are no additional elements for claim 11. Independent claims 1 and 10 merely recite the words “to execute,” which are interpreted to mean substantially “apply it” (or an equivalent) with the judicial exception; merely include instructions to implement an abstract idea on a computer; or merely use a computer as a tool to perform an abstract idea, and therefore do not integrate the judicial exception. Further, the abstract idea provides no improvement to the claimed generic computing system of claims 1 and 10 and as such fails to integrate the judicial exception under Step 2A, prong 2. Taking claims 1 and 10 as examples:
A) “a non-transitory computer-readable storage medium storing an information processing program,” “recognition function,” “difference elimination function,” and “output function,” all of which comprise computer program products run on the generic computing device described in claim 10 and do not add significantly more to the claims.
B) “a computer,” as recited in claim 10, which comprises a generic computing component that does not provide significantly more.
C) “a scanning system,” as recited in claim 1, which comprises a generic computing component that does not provide significantly more; scanning systems are stated to be generic to the art in prior art WITHGOTT at column 7, lines 1-28.
Under Step 2A, prong 2, the above-identified generically recited computer elements do not add a meaningful limitation to the abstract idea because they amount to simply implementing the abstract idea on a computer. The examiner maintains that all of these steps comprise mental process steps that have not been integrated into significantly more by structural/additional claimed elements.
Under Step 2B, claims 1-11 do not recite additional elements that amount to significantly more than the judicial exception. The only additional elements (a generic scanning system in claim 1, scanning systems being stated to be generic to the art in prior art WITHGOTT at column 7, lines 1-28; a non-transitory computer-readable storage medium storing an information processing program in claim 10; and a computer in claim 10, which comprises a generic computing system) are recited at a high level of generality and merely equate to the previously mentioned “to execute”/“apply it,” or otherwise merely use a generic computer and generic computing components as a tool to perform an abstract idea/mental process, which is not indicative of integration into a practical application per MPEP 2106.05(f). The corresponding dependent claims fail to introduce significantly more and only include the generic computing components introduced and discussed in the independent claims. See also MPEP 2106.04(a)(2)(III) with respect to mental processes: “Nor do the courts distinguish between claims that recite mental processes performed by humans and claims that recite mental processes performed on a computer.” See also MPEP 2106.04(a)(2)(III)(C)(3) (use of a computer as a tool to perform a mental process) and MPEP 2106.04(a)(2)(III)(D), as well as the case law cited therein.
Further, the dependent claims do not remedy these deficiencies:
- Claims 2-3 and 5-8 further recite mental processes which could be performed in the human mind with pen and paper.
- Claims 4 and 9 represent post-solution activity of bookmarking and outputting a PDF file, respectively.
In other words, the additional elements are recited at a high level of generality that does not amount to significantly more, and the claimed steps are such that they could practically be performed in the human mind. For all of the above reasons, taken alone or in combination, claims 1-11 recite a non-statutory mental process.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-8 and 10-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by US 5,491,760 A to WITHGOTT et al. (hereinafter “WITHGOTT”).
As per claim 1, WITHGOTT discloses a scanning system (a method and apparatus for excerpting and summarizing a document image without first converting the document image to optical character codes such as ASCII text; the system and corresponding method identify significant words, phrases, and graphics in the document image using automatic and interactive morphological image recognition techniques, and document summaries or indices (table of contents) are produced based on the identified significant portions of the document image; abstract; figs. 1-3, 8-9; column 6, line 64-column 7, line 28), comprising: a document reading unit configured to read a document and generate first image data of a plurality of pages read from the document (the system includes, but is not limited to, a standard document scanning apparatus and corresponding computing components adapted to read and scan documents containing text and break them up into “image units” containing document content to read, analyze, and summarize, each image unit being summarized into a result that is output to a user interface provided over a display; abstract; column 6, lines 25-57; column 6, line 58-column 7, line 23; column 7, line 29-column 8, line 16); a recognition unit configured to recognize a character included in the first image data (the computing system is adapted to recognize known character codes of scanned document image files containing text and groups the codes into information groups defined as image units; the system does this via word shape signal computer 724 (recognition unit), which derives a word shape signal representing the individual words in the image based on the original image and the bounding box determination, and uses the information in word shape comparator 726 to compare the character/word shapes of known words and produce a degree of similarity to known words (the degree of similarity is inverse to the difference: the lower the similarity value, the greater the difference, and vice versa); the difference is eliminated by increasing the degree of similarity between the generated document and the scanned document; column 6, lines 25-57; column 7, lines 24-51; column 8, lines 16-64; column 10, lines 18-28; column 12, lines 21-54); a difference elimination unit configured to detect, based on a recognition result obtained by the recognition unit, a difference between first page information obtained from the recognition result and second page information sequentially assigned to image data of the pages (comparator 726, acting as a difference elimination unit, is adapted to take words and segments of words divided by the bounding boxes of the scanned images and compare the words for similarity (differences), using the word shape signals derived from signal computer 724, acting as the recognition unit, to determine the degree of similarity between a first and a second image unit comprising content including word shapes of the scanned text images; the comparator is used as an apparatus for comparing one word shape against another to produce a relative indication of the degree of similarity between the two shapes, and to order them substantially sequentially in the order in which the image units comprising the identified word shapes are scanned and summarized, wherein “sequentially” refers to segmenting the output information by page number/identifiable number and into a left-to-right/top-to-bottom reading order for the generated summary word images; column 12, line 21-column 13, line 27; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 29, lines 37-59; column 32, lines 5-16), and to generate table-of-contents information including page information in which the difference is eliminated (further, region segmentation image analysis can be performed to generate a physical document (table of contents/index) structure description that divides page images into labelled regions corresponding to auxiliary scanned document elements such as page numbers, figures, tables, and footnotes; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 28, lines 51-58; column 29, lines 1-59; column 32, lines 5-16); and an output unit configured to output second image data including the table-of-contents information (the computing system is adapted to output the generated physical document, which is a summarized table of contents, to a display of the computer interface; column 3, line 51-column 4, line 12; column 6, lines 25-57; column 7, lines 1-28; column 34, lines 40-54).
As per claim 2, WITHGOTT discloses the scanning system according to claim 1, wherein the difference elimination unit generates the table-of-contents information (the word shape signal computer 724 comprises the comparator 726, which generates the degree of similarity between image units containing word shapes and summarizes said information to be used in a generated table of contents; column 3, lines 13-16; column 7, line 29-column 8, line 6; column 21, lines 5-55), which is table-of-contents information in which the first page information is displayed (in which the generated table of contents includes summarized sections and locations of image units containing content and words to be summarized from the scanned text document; column 7, line 29-column 8, line 6; column 12, line 21-column 13, line 27; column 21, lines 5-55), including a link on which image data of a page corresponding to the second page information among the image data of the plurality of pages is displayed (further, as done in step 50, related image units may be linked together in the summarized content document, allowing the user to jump to related content linked to the textual content summary provided in the generated table of contents, which is displayed over the provided user interface via an output display; column 6, lines 25-67; column 7, line 29-column 8, line 6; column 12, line 21-column 13, line 27; column 21, lines 5-55).
As per claim 3, WITHGOTT discloses the scanning system according to claim 2, wherein the difference elimination unit searches for a heading, which is a start of the first page information, from the recognition result, and generates the table-of-contents information in which the first page information starting from a page in which the heading is present is displayed as a bookmark (the image units identified by the system are image content comprising text information pertinent to the text being broken down and summarized; this information includes page numbers, headers, and headings to be included in the generated table of contents, and the system is further adapted to allow users to mark pages with underlines or highlighting, which acts as a bookmark to highlight or mark a section for later reference; column 3, lines 13-21; column 7, line 29-column 8, line 6; column 21, line 21-column 22, line 50; column 33, lines 14-65).
As per claim 4, WITHGOTT discloses the scanning system according to claim 2, wherein when a table of contents including the first page information is included in the recognition result, the difference elimination unit generates, from the table of contents, the table-of-contents information, which is the table-of-contents information in which the first page information is displayed as a bookmark, including a link on which image data corresponding to the second page information among the image data of the plurality of pages is displayed (the computing system is adapted, via the summary document which is output, to provide links from the summary document to the original document; the linked information can in turn be highlighted by the user at its original document location in order to effectively “bookmark” the linked section and use the link to navigate back to that section; column 3, lines 13-21; column 21, line 21-column 22, line 67; column 33, lines 14-65).
As per claim 5, WITHGOTT discloses the scanning system according to claim 1, wherein the difference elimination unit generates the table-of-contents information in which the second page information is displayed (the word shape signal computer 724 comprises the comparator 726, which generates the degree of similarity between image units containing word shapes and summarizes said information to be used in a generated table of contents, including second page information as one of the plurality of input image units; at step 50, related image units may be linked together in the summarized content document, allowing the user to jump to related content linked to the textual content summary provided in the generated table of contents, which is displayed over the provided user interface via an output display; column 6, lines 5-67; column 7, line 29-column 8, line 6; column 12, line 21-column 13, line 27; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 28, lines 51-58; column 29, lines 1-59; column 32, lines 5-16).
As per claim 6, WITHGOTT discloses the scanning system according to claim 5, wherein the difference elimination unit identifies, based on the recognition result, a position of the first page information included in the image data of the plurality of pages, and adds the second page information to the image data of the plurality of pages at the position of the first page information (the computing system is adapted to track the page position of the image units defined using bounding boxes, using a structuring element to align the image unit to cover the main bodies of text of the scanned document; a location is recorded as a “hit” if the defined 2-by-2 square structuring element corresponding to an image unit contains words of the scanned text image, or as a “miss” if the image unit does not contain words, and the positions of the bounded image units are recorded; figs. 7, 19-20, and 21A-B; column 8, lines 22-55; column 10, line 18-column 11, line 54; column 17, lines 29-61).
As per claim 7, WITHGOTT discloses the scanning system according to claim 5, wherein the difference elimination unit identifies, based on the recognition result, a position of a main text and a position of the first page information attached to the main text in the image data of the plurality of pages, does not add the second page information to the position of the main text, but adds the second page information to the image data of the plurality of pages at the position of the first page information (based on word shape recognition, an image unit containing word shapes is identified, summarized, defined under a section heading, and classified to be output in the generated table of contents, wherein similar content is linked within the generated table to pages and related information, and the summarized content in the specific table section is linked to the original content's original location, including page numbers associated with said content; column 8, lines 33-55; column 13; column 17, lines 45-61; column 21, lines 5-67).
As per claim 8, WITHGOTT discloses the scanning system according to claim 5, wherein the difference elimination unit identifies, based on the recognition result, a position of the page information included in the image data of the plurality of pages (using the computing system, bounding boxes aided by a structuring element determine the word shapes within the bounding boxes to be input into word shape signal computer 724 (recognition unit), which derives a word shape signal representing the individual words in the image based on the original image and the bounding box determination, and uses the information in word shape comparator 726, acting as the difference elimination unit, to compare the character/word shapes of known words and produce a degree of similarity to known words; column 6, lines 25-57; column 7, lines 24-51; column 8, lines 16-64; column 10, lines 18-28; column 12, lines 21-54), determines, based on the recognition result, whether the page information included in the image data of the plurality of pages indicates a page of the image data of the plurality of pages or indicates a page of another document (based on the degree of similarity determined by the comparator, the image units are classified into sections relating to similar content/information of the scanned text and are linked so that the generated table-of-contents document provides easy access and links to the original document content summarized in the table of contents; column 12, line 21-column 13, line 27; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 29, lines 37-59; column 32, lines 5-16), does not add the second page information to a position where the page information included in the image data of the plurality of pages indicates the page of another document (the system is adapted, after the image units are used to establish bounding boxes around scanned text information of interest, to eliminate or delete noise or error identified within the bounding boxes so that unneeded noise or incorrect information is not included in the generated index/table of contents of summarized text information of the scanned documents; column 18, lines 24-58; column 21, lines 5-56; column 25, lines 5-62), but adds the second page information to the image data of the plurality of pages at a position where the page information included in the image data of the plurality of pages indicates the page of the image data of the plurality of pages (the computing system is adapted to add and link to the generated table of contents information determined to be significant; region segmentation image analysis can be performed to generate a physical document structure description that divides page images into labelled regions and can provide links to corresponding auxiliary/original scanned document elements such as figures, tables, and footnotes; column 3, lines 9-25; column 21, lines 5-56; column 23, lines 1-20).
As per claim 10, WITHGOTT discloses a non-transitory computer-readable storage medium storing an information processing program (a computing system comprising execution processing means for performing functions by executing program instructions contained in a memory means in a predetermined manner; column 4, lines 1-4), the program causing a computer to execute: a recognition function of recognizing a character included in first image data of a plurality of pages read from a document (the system includes, but is not limited to, a standard document scanning apparatus and corresponding computing components adapted to read and scan documents containing text and break them up into “image units” containing document content to read, analyze, and summarize, each image unit being summarized into a result that is output to a user interface provided over a display; abstract; column 6, lines 25-57; column 6, line 58-column 7, line 23; column 7, line 29-column 8, line 16); a difference elimination function of detecting, based on a recognition result obtained by the recognition function (comparator 726, acting as a difference elimination unit, is adapted to take words and segments of words divided by the bounding boxes of the scanned images and compare the words for similarity (differences), using the word shape signals derived from signal computer 724, acting as the recognition unit, to determine the degree of similarity between a first and a second image unit comprising content including word shapes of the scanned text images; the comparator is used as an apparatus for comparing one word shape against another to produce a relative indication of the degree of similarity between the two shapes, and to order them substantially sequentially in the order in which the image units comprising the identified word shapes are scanned and summarized, wherein “sequentially” refers to segmenting the output information by page number/identifiable number and into a left-to-right/top-to-bottom reading order for the generated summary word images; column 12, line 21-column 13, line 27; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 29, lines 37-59; column 32, lines 5-16), a difference between first page information obtained from the recognition result and second page information sequentially assigned to image data of the pages (the computing system is adapted to recognize known character codes of scanned document image files containing text and groups the codes into information groups defined as image units; the system does this via word shape signal computer 724 (recognition unit), which derives a word shape signal representing the individual words in the image based on the original image and the bounding box determination, and uses the information in word shape comparator 726 to compare the character/word shapes of known words and produce a degree of similarity (the degree of similarity is inverse to the difference: the lower the similarity value, the greater the difference, and vice versa); column 6, lines 25-57; column 7, lines 24-51; column 8, lines 16-64; column 10, lines 18-28; column 12, lines 21-54), and generating table-of-contents information including page information in which the difference is eliminated (further, region segmentation image analysis can be performed to generate a physical document (table of contents/index) structure description that divides page images into labelled regions corresponding to auxiliary scanned document elements such as page numbers, figures, tables, and footnotes; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 28, lines 51-58; column 29, lines 1-59; column 32, lines 5-16); and an output function of outputting second image data including the table-of-contents information (the computing system is adapted to output the generated physical document, which is a summarized table of contents, to a display of the computer interface; column 3, line 51-column 4, line 12; column 6, lines 25-57; column 7, lines 1-28; column 34, lines 40-54).
As per claim 11, WITHGOTT discloses a method for generating second image data (a method and apparatus that take an input document as first image data and generate second image data as an ordered table of contents/index by excerpting and summarizing a document image, without first converting the document image to optical character codes such as ASCII text; the system and corresponding method identify significant words, phrases, and graphics in the document image using automatic and interactive morphological image recognition techniques, and, as stated, document summaries or indices (table of contents) are produced based on the identified significant portions of the document image; abstract; figs. 1-3, 8-9; column 3, lines 1-25; column 6, line 64 - column 7, line 28; column 21, lines 21-56), the method comprising: recognizing a character included in first image data of a plurality of pages read from a document (the system includes and utilizes a standard document scanning apparatus and corresponding computing components adapted to read and scan documents containing text, break them into "image units" of document content, and read, analyze, and summarize each image unit into a result that is output to a user interface provided on a display; recognition of character/word shapes is performed in order to determine bounding boxes for the image units; abstract; column 6, lines 25-57; column 6, line 58 - column 7, line 23; column 7, line 29 - column 8, line 16); detecting a difference between first page information obtained from a recognition result (the computing system performs recognition of known character codes in scanned document image files containing text and groups the codes into information groups defined as image units; word shape signal computer 724 (recognition unit) derives a word shape signal representing each individual word in the image, based on the original image and the bounding box determination, and word shape comparator 726 uses that information to compare the character/word shapes of known words and produce a degree of similarity to known words; column 6, lines 25-57; column 7, lines 24-51; column 8, lines 16-64; column 10, lines 18-28; column 12, lines 21-54) and second page information sequentially assigned to image data of the pages (comparator 726, acting as the difference elimination unit, takes words and word segments delimited by bounding boxes in the scanned images and compares the words for similarity (differences) using the word shape signals derived by signal computer 724, acting as the recognition unit; the comparator determines the degree of similarity between a first and a second image unit comprising word shapes of the scanned text images, producing a relative indication of the degree of similarity between the two shapes, and orders the image units substantially sequentially in the order in which the identified word shapes are scanned and summarized, where "sequentially" refers to segmenting the output information by page number/identifiable number into a left-to-right, top-to-bottom reading order for the generated summary word images; column 12, line 21 - column 13, line 27; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 29, lines 37-59; column 32, lines 5-16); and generating second image data in which the detected difference is eliminated (the computing system generates a physical document structure description (table of contents/index) that divides page images into labelled regions corresponding to auxiliary scanned document elements such as page numbers, figures, tables, and footnotes; column 21, lines 5-56; column 25, lines 5-62; column 26, lines 9-40; column 28, lines 51-58; column 29, lines 1-59; column 32, lines 5-16).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.
Claim 9 is rejected under 35 U.S.C. § 103 as being unpatentable over US 5,491,760 A to WITHGOTT et al. (hereinafter “WITHGOTT”) in view of US 2022/0319219 A1 to Tsibulevskiy et al. (hereinafter “Tsibulevskiy”).
As per claim 9, WITHGOTT discloses the scanning system according to claim 1. WITHGOTT fails to disclose wherein the output unit outputs the second image data as a PDF file.
Tsibulevskiy discloses wherein the output unit outputs the second image data as a PDF file (the system is adapted to output results as a PDF file that can be viewed, interacted with, and sent to other users/parties via a user interface provided by the computing system; fig. 10A; paragraphs [0020], [0027]-[0028], [0218]).
It would have been obvious to a person of ordinary skill in the art, before the effective filing date of the claimed invention, to modify WITHGOTT such that the output unit outputs the second image data as a PDF file, as taught by Tsibulevskiy. The suggestion/motivation for doing so would have been to provide the final readable output in a universally adopted format such as PDF so that the resulting file may be easily viewed and sent to parties interested in the summarized content, as suggested by Tsibulevskiy at paragraph [0000]. Further, one skilled in the art could have combined the elements as described above by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results. Therefore, it would have been obvious to combine Tsibulevskiy with WITHGOTT to obtain the invention as specified in claim 9.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. These references include the following:
US 2006/0282760 A1
US 2020/0396340 A1
US 2022/0318224 A1
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DEVIN JACOB DHOOGE whose telephone number is (571) 270-0999. The examiner can normally be reached 7:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Andrew Bee can be reached on (571) 270-5183. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800- 786-9199 (IN USA OR CANADA) or 571-272-1000.
/Devin Dhooge/
USPTO Patent Examiner
Art Unit 2677
/ANDREW W BEE/Supervisory Patent Examiner, Art Unit 2677