Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In response to the requirement for election/restriction filed on 01/09/2026, Applicant elected Species I, claims 1-9, for examination. Accordingly, Applicant should cancel claims 10-21.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 CFR 3.73(b).
Claims 1-9 are rejected on the ground of nonstatutory obviousness-type double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 11,977,533. Although the conflicting claims are not identical, they are not patentably distinct from each other because the claims of U.S. Patent No. 11,977,533 contain every element of the claims of the instant application and, as such, anticipate the claims of the instant application (see table below).
Instant Application, claim 1
U.S. Patent No. 11,977,533, claim 1
A computer program product for detecting tables and/or tabular data arrangements within an original image, the computer program product comprising a computer readable medium having program instructions embodied therewith, wherein the program instructions are configured to cause a processor, upon execution thereof, to perform a method comprising:
pre-processing the original image to generate processed image data, wherein pre-processing the original image comprises identifying one or more delineating lines depicted in the original image, wherein identifying the one or more delineating lines comprises:
obtaining a third set of rules defining criteria of delineating lines;
evaluating the original image against the third set of rules; and
generating a set of delineating lines based on the evaluation; and
detecting one or more tables and/or one or more tabular data arrangements within the processed image data.
A computer-implemented method for detecting and classifying tables and/or tabular data arrangements within an original image, comprising:
pre-processing the original image to generate processed image data, wherein pre-processing the original image comprises identifying one or more delineating lines depicted in the original image, wherein identifying the one or more delineating lines comprises:
obtaining a third set of rules defining criteria of delineating lines;
evaluating the original image against the third set of rules; and
generating a set of delineating lines based on the evaluation; grouping words into phrases;
detecting one or more tables and/or one or more tabular data arrangements within the processed image data; extracting the one or more tables and/or the one or more tabular data arrangements from the processed image data; and classifying either: the one or more extracted tables; portions of the one or more extracted tables; the one or more extracted tabular data arrangements; portions of the one or more extracted tabular data arrangements; or a combination of: the one or more extracted tables; the portions of the one or more extracted tables; the one or more extracted tabular data arrangements; and/or the portions of the one or more extracted tabular data arrangements.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-9 are rejected under 35 U.S.C. 101 because:
At step 1:
Claims 1-9 are directed to “automated document processing for detecting, extracting, and analyzing table and tabular data” and thus are directed to a statutory category.
At step 2A, Prong One:
Claim 1 recites the following limitations directed to an abstract idea:
“pre-processing the original image to generate processed image data, wherein pre-processing the original image comprises identifying one or more delineating lines depicted in the original image, wherein identifying the one or more delineating lines comprises: obtaining a third set of rules defining criteria of delineating lines; evaluating the original image against the third set of rules; and generating a set of delineating lines based on the evaluation” recites a mental process, as a person could mentally observe and analyze the original image to generate processed image data, wherein analyzing the original image comprises identifying (e.g., highlighting) one or more delineating lines depicted in the original image, wherein identifying the one or more delineating lines comprises: obtaining a third set of rules defining criteria of delineating lines (using the third set of rules to analyze the image); evaluating the original image against the third set of rules; and generating a set of delineating lines based on the evaluation.
“detecting one or more tables and/or one or more tabular data arrangements within the processed image data” recites a mental process, as a person could mentally recognize one or more tables and/or one or more tabular data arrangements within the processed image data.
Claim 2 recites the following limitation directed to an abstract idea:
Claim 2 recites a mental process, such as visually observing and analyzing the original image, wherein pre-processing the original image comprises grouping words into phrases, and wherein grouping the words into the phrases comprises: determining whether one or more boundaries between textual elements depicted in the original image are characterized by a width greater than an average width of whitespace characters depicted in the original image; and in response to determining at least one of the one or more boundaries is not characterized by a width greater than the average width of the whitespace characters depicted in the original image, grouping the corresponding textual elements to form one or more phrases.
Claim 3 recites the following limitation directed to an abstract idea:
Claim 3 recites a mental process, such as visually observing and analyzing the original image, wherein pre-processing the original image comprises detecting subpages, wherein detecting the subpages comprises: obtaining a set of rules (e.g., a schema or look-up) defining criteria of subpages, wherein the criteria of subpages comprise: the original image including a vertical graphical line that spans a vertical extent of a page of a document depicted in the original image; and/or the original image depicting horizontally adjacent regions each having a plurality of textual elements and/or horizontal graphical lines exhibiting at least one common alignment characteristic; and evaluating the original image against the set of rules; and defining one or more subpages within the original image based on the evaluation.
Claim 4 recites the following limitation directed to an abstract idea:
Claim 4 recites a mental process, such as visually observing and analyzing the original image, wherein pre-processing the original image comprises performing layout analysis (e.g., drawing) on the original image, wherein the layout analysis comprises identifying one or more excluded zones within the original image.
Claim 5 recites the following limitation directed to an abstract idea:
Claim 5 recites a mental process, such as visually observing and analyzing the image data, wherein pre-processing the image data comprises: generating (e.g., drawing on paper) a first representation of the original image; identifying one or more horizontal graphical lines depicted in the original image, and/or one or more vertical graphical lines depicted in the original image; identifying one or more gaps in the one or more horizontal graphical lines and/or the one or more vertical graphical lines of the first representation; and restoring the one or more horizontal graphical lines and/or the one or more vertical graphical lines by filling in the one or more gaps.
Claim 6 recites the following limitation directed to an abstract idea:
Claim 6 recites a mental process, such as visually observing and analyzing the original image, wherein pre-processing the original image comprises generating a first representation of the original image, wherein generating the first representation (e.g., drawing) does not create any graphical lines that are not represented in the original image, and wherein the first representation excludes textual characters represented in the original image.
Claim 7 recites the following limitation directed to an abstract idea:
Claim 7 recites a mental process, such as visually observing and analyzing the processed image data, to extract the one or more tables and/or the one or more tabular data arrangements from the processed image data.
Claim 8 recites the following limitation directed to an abstract idea:
Claim 8 recites a mental process, such as visually observing and analyzing, to classify or group the one or more tables and/or the one or more tabular data arrangements.
Claim 9 recites the following limitation directed to an abstract idea:
Claim 9 recites a mental process, such as visually observing and analyzing, wherein detecting the one or more tables and/or the one or more tabular data arrangements comprises: performing grid-based detection; denoting one or more areas within the original image that include a grid-like table and/or a grid-like tabular data arrangement as an excluded zone; and performing non-grid-based detection on portions of the original image that are not denoted as excluded zones.
At step 2A, Prong Two:
The claims recite the following additional elements:
The claims recite the additional elements of a processor and execution of a computer program. These are generic computer components and functions and represent mere instructions to apply the abstract idea on a computer, as discussed in MPEP 2106.05(f), which does not provide integration into a practical application.
At step 2B:
The conclusions regarding the mere implementation using a generic computer and the mere field of use are carried over and do not provide significantly more.
With respect to claims 1-9, the claimed inventions are directed to non-statutory subject matter. Claims 1-9 recite a "computer readable medium," and the specification fails to define what the readable medium is. The specification, in paragraphs [0113] and [0126], states that “a computer storage medium” is not to be construed as transitory signals. However, “storage medium” is not recited in claims 1-9. Therefore, claims 1-9 are directed to non-statutory subject matter.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1 and 7-8 are rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (U.S. Pub. 2021/0117668 A1) in view of Sirohey et al. (U.S. Pub. 2005/0256399 A1).
With respect to claim 1, Zhong et al. discloses a computer program product for detecting tables and/or tabular data arrangements within an original image, the computer program product comprising a computer readable medium having program instructions embodied therewith, wherein the program instructions are configured to cause a processor, upon execution thereof, to perform a method comprising:
pre-processing the original image to generate processed image data (i.e., “The training includes receiving a set of images of tabular data and a set of markup data corresponding respectively to the images of tabular data. The training further includes training a first neural network to delineate the tabular data from the set of images into cells using the markup data.” (0004); the training's receiving of a set of images corresponds to pre-processing the original image of the claimed invention), wherein pre-processing the original image comprises identifying one or more delineating lines depicted in the original image (i.e., training the first neural network to delineate the tabular data from the set of images into cells using the markup data (0004)), wherein identifying the one or more delineating lines comprises:
obtaining a third set of rules defining criteria of delineating lines (not disclosed);
evaluating the original image against the third set of rules (not disclosed); and
generating a set of delineating lines based on the evaluation (not disclosed); and detecting one or more tables and/or one or more tabular data arrangements within the processed image data (i.e., “the tabulated data is included in the document as an image, without any corresponding information that describes the structure that is used to tabulate the data. The structure indicates how the data is delineated, for example, into rows, columns, cells, and other such components of the table.” (0004) and “The system 100 includes, among other components a content recognition device 120 that receives an input image 112 of tabulated data from an electronic document 110” (0025)). However, Zhong et al. does not disclose obtaining a third set of rules defining criteria of delineating lines; evaluating the original image against the third set of rules; and generating a set of delineating lines based on the evaluation. Sirohey et al. discloses obtaining a third set of rules defining criteria of delineating lines (i.e., “The method includes accessing data of a scan of an object, using at least one characteristic of the accessed data to delineate at least one item of interest in the data and generating a 3D visualization image wherein transparency levels for at least some pixels not representing the item of interest are set according to a first set of rules, and transparency levels for at least some pixels representing an interior portion of the item of interest are set according to a second set of rules different than the first set of rules, and at least some pixels representative of a transition area are set according to a third set of rules different than the first and second sets of rules.” (0005)); evaluating the original image against the third set of rules (i.e., “transparency levels of pixels representing transition area 204 between item of interest 202 and rest of image 206 are set according to a third set of rules.” (0019)); and generating a set of delineating lines based on the evaluation (i.e., “of methods for identifying and delineating item of interest 202 within the data include iterative thresholding, k-means segmentation, edge detection, edge linking, curve fitting, curve smoothing, morphological filtering, region growing, fuzzy clustering, image or volume measurements, heuristics, knowledge-based rules, decision trees, neural networks and the like.” (0012)). It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include Sirohey et al.'s feature in order to obtain an accurate evaluation; the stated purpose has been well known in the art as evidenced by the teaching of Sirohey et al. (0031). Both references are in the same field, namely extracting information from images.
With respect to claim 7, Zhong et al. discloses further comprising program instructions configured to cause the processor, upon execution thereof, to extract the one or more tables and/or the one or more tabular data arrangements from the processed image data (i.e., “A computer-implemented method for using a machine learning model to automatically extract tabular data from an image includes receiving a set of images of tabular data and a set of markup data corresponding respectively to the images of tabular data.” (abstract)).
With respect to claim 8, Zhong et al. discloses further comprising program instructions configured to cause the processor, upon execution thereof, to classify the one or more tables and/or the one or more tabular data arrangements (i.e., “one or more embodiments of the present invention facilitate a machine to autonomously understand unstructured tables from various literature that is available in electronic format.” (0079) and “existing compare and comply system uses a set of manually defined rules to define the table layout and extract content using OCR technology. Embodiments of the present invention facilitate such extraction by using a processing end-to-end that avoids some of the errors accumulated by the different processing steps in the compare and comply system.” (0080)).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (U.S. Pub. 2021/0117668 A1) and Sirohey et al. (U.S. Pub. 2005/0256399 A1), and further in view of Bjornerud et al. (U.S. Pub. 2008/0221441 A1).
With respect to claim 4, Zhong et al. and Sirohey et al. disclose all limitations recited in claim 1 except for wherein pre-processing the original image comprises performing layout analysis on the original image, wherein the layout analysis comprises identifying one or more excluded zones within the original image. However, Bjornerud et al. discloses wherein pre-processing the original image comprises performing layout analysis on the original image, wherein the layout analysis comprises identifying one or more excluded zones within the original image (i.e., “a data selection tool for assisting an operator in selecting regions of the tumor whose corresponding values in the map are to be applied in the grading and excluding regions representing large blood vessels and areas of necrosis;” (0015)). It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include Bjornerud et al.'s feature in order to provide a reliable indicator of tumor grade in physiological imaging, such as dynamic susceptibility imaging; the stated purpose has been well known in the art as evidenced by the teaching of Bjornerud et al. (0002). Both references are in the same field, namely extracting information from images.
Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (U.S. Pub. 2021/0117668 A1) and Sirohey et al. (U.S. Pub. 2005/0256399 A1), and further in view of Shustorovich et al. (U.S. Pub. 2015/0347836 A1).
With respect to claim 6, Zhong et al. and Sirohey et al. disclose all limitations recited in claim 1 except for wherein pre-processing the original image comprises generating a first representation of the original image, wherein generating the first representation does not create any graphical lines that are not represented in the original image, and wherein the first representation excludes textual characters represented in the original image. However, Shustorovich et al. discloses wherein pre-processing the original image comprises generating a first representation of the original image (i.e., “[Sy]stems, computer program products, and techniques for discriminating hand and machine print from each other, and from signatures, are disclosed and include determining a color depth of an image, the color depth corresponding to at least one of grayscale, bi-tonal and color; reducing color depth of non-bi-tonal images to generate a bi-tonal representation of the image;” (abstract)), wherein generating the first representation does not create any graphical lines that are not represented in the original image (i.e., “removing the true graphical lines from the bi-tonal image or the bi-tonal representation without removing the false positives to generate a component map comprising connected components and excluding graphical lines” (abstract)), and wherein the first representation excludes textual characters represented in the original image (i.e., “The bi-tonal image is then subjected to a graphical line removal process to eliminate artificial connections between characters, i.e. connections arising from any source other than the shape of the character per se.” (0072) and “In preferred approaches, graphical line removal includes eliminating any lines fitting the above definition, but also includes reconstructing any estimated characters for which portions thereof were removed in the process of removing the graphical lines.” (0081)).
It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include Shustorovich et al.'s feature in order to clearly define, for the user, which portions of the image are represented; the stated purpose has been well known in the art as evidenced by the teaching of Shustorovich et al. (0004). Both references are in the same field, namely extracting information from images.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Zhong et al. (U.S. Pub. 2021/0117668 A1) and Sirohey et al. (U.S. Pub. 2005/0256399 A1), and further in view of Cramer et al. (U.S. Pub. 2020/0097713 A1).
With respect to claim 9, Zhong et al. and Sirohey et al. disclose all limitations recited in claim 1 except for wherein detecting the one or more tables and/or the one or more tabular data arrangements comprises: performing grid-based detection; denoting one or more areas within the original image that include a grid-like table and/or a grid-like tabular data arrangement as an excluded zone; and performing non-grid-based detection on portions of the original image that are not denoted as excluded zones. However, Cramer et al. discloses wherein detecting the one or more tables and/or the one or more tabular data arrangements comprises: performing grid-based detection; denoting one or more areas within the original image that include a grid-like table and/or a grid-like tabular data arrangement as an excluded zone (i.e., “a table refers to a set of graphical grid lines in the document that represent a table of information.” (0025)); and performing non-grid-based detection on portions of the original image that are not denoted as excluded zones (i.e., “In the redaction block processing engine 14, the text fragment module 18 is provided and configured with computer program functionality for reading every page of the input (e.g., PDF, png, jpg, word, etc.) document 10 to detect and read the text, labels, checkbox states, and tables on every page of the document 10, extracting the data in the proper context of the page” (0026)). It would have been obvious for a person of ordinary skill in the art, before the effective filing date of the claimed invention, to include Cramer et al.'s feature in order to facilitate ingesting and processing redacted electronic documents at any level; the stated purpose has been well known in the art as evidenced by the teaching of Cramer et al. (0002). Both references are in the same field, namely extracting information from images.
Allowable Subject Matter
Claim 2 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim, with allowance contingent on overcoming the rejections under 35 U.S.C. 101, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest wherein pre-processing the original image comprises grouping words into phrases, and wherein grouping the words into the phrases comprises: determining whether one or more boundaries between textual elements depicted in the original image are characterized by a width greater than an average width of whitespace characters depicted in the original image; and in response to determining at least one of the one or more boundaries is not characterized by a width greater than the average width of the whitespace characters depicted in the original image, grouping the corresponding textual elements to form one or more phrases.
Claim 3 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim, with allowance contingent on overcoming the rejections under 35 U.S.C. 101, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest wherein pre-processing the original image comprises detecting subpages, wherein detecting the subpages comprises: obtaining a set of rules defining criteria of subpages, wherein the criteria of subpages comprise: the original image including a vertical graphical line that spans a vertical extent of a page of a document depicted in the original image; and/or the original image depicting horizontally adjacent regions each having a plurality of textual elements and/or horizontal graphical lines exhibiting at least one common alignment characteristic; and evaluating the original image against the set of rules; and defining one or more subpages within the original image based on the evaluation.
Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim, with allowance contingent on overcoming the rejections under 35 U.S.C. 101, since the prior art of record and considered pertinent to the applicant’s disclosure does not teach or suggest wherein pre-processing the image data comprises: generating a first representation of the original image; identifying one or more horizontal graphical lines depicted in the original image, and/or one or more vertical graphical lines depicted in the original image; identifying one or more gaps in the one or more horizontal graphical lines and/or the one or more vertical graphical lines of the first representation; and restoring the one or more horizontal graphical lines and/or the one or more vertical graphical lines by filling in the one or more gaps.
Citation of Pertinent References
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
The patent publication to Liu et al. (U.S. Pub. 2006/0082595 A1) discloses a Device Part Assembly Drawing Image Search Apparatus.
The patent publication to Liao et al. (U.S. Pub. 2021/0366099 A1) discloses Techniques for Image Content Extraction.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HUNG T VY whose telephone number is (571)272-1954. The examiner can normally be reached M-F 8-5.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tony Mahmoudi can be reached on (571)272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUNG T VY/
Primary Examiner, Art Unit 2163
February 07, 2026