DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-20 are pending in the application.
Claim Objections
Claim 7, a method claim dependent upon claim 1, recites “[T]he computer-implemented method of claim 1, further comprising: an entity relationship mapping engine, a natural language processing engine; and a legal entity recognition engine”. These limitations do not actively recite any acts or functions. Clarification is required.
Nonstatutory Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the claims at issue are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); and In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground provided the reference application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit http://www.uspto.gov/forms/. The filing date of the application will determine what form should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to http://www.uspto.gov/patents/process/file/efs/guidance/eTD-info-I.jsp.
Claim 1 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 1 of U.S. Patent No. 11,804,057 B1. Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons.
The following table provides a limitation-by-limitation comparison of examined claim 1 and conflicting claim 1.
Application being examined 18/492,369 (hereafter ‘369 application)
Conflicting patent 11,804,057 B1 (hereafter ‘057 patent)
Claim 1
1. A computer-implemented method comprising:
identifying, by at least one processor of a digital asset generation platform, at least one data object model of one or more data object models for at least one digital file in at least one digital format for a digital representation of one or more documents; (see right column underlined text)
detecting, by the at least one processor of the digital asset generation platform, for each respective fiducial marking of one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model and associated with a respective spatial correlation meeting or exceeding a loss metric criterion, a dynamic mapping between the respective fiducial marking and a respective data element of the one or more data elements; (see right column bold text) and
generating, by the at least one processor of the digital asset generation platform, a digital asset representative of the one or more documents based on one or more key value data pairs extracted from the dynamic mapping between each fiducial marking of the one or more fiducial markings and the respective data element of the one or more data elements. (see right column underlined bold text)
Claim 1
A computer-implemented method comprising:
ingesting, by at least one processor of a digital asset generation platform, an ingest input that comprises at least one digital file in at least one digital format for a digital representation of one or more documents;
utilizing, by the at least one processor of the digital asset generation platform, a document identification engine corresponding to a first stage of a multi-stage convolutional neural network for identifying one or more document types of the one or more documents by automatically:
identifying at least one data object model of one or more data object models for the one or more documents;
wherein the at least one data object model corresponds to the one or more document types;
utilizing, by the at least one processor of the digital asset generation platform, an object detector engine corresponding to a second stage of the multi-stage convolutional neural network for detecting a dynamic mapping in the at least one digital file by automatically:
iteratively generating, by the at least one processor of the digital asset generation platform, one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model;
identifying, for each of the one or more fiducial markings, a spatial correlation between:
a respective fiducial marking of the one or more fiducial markings, and
a respective data element of the one or more data elements; and
detecting, for each respective fiducial marking of the one or more fiducial markings associated with a respective spatial correlation meeting or exceeding a loss metric criterion, the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements;
utilizing, by the at least one processor of the digital asset generation platform, a post-processing engine for classifying the dynamic mapping in the at least one digital file by automatically:
extracting one or more key value data pairs from the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements; and
dynamically generating, by the at least one processor of the digital asset generation platform, a digital asset representative of the one or more documents based on the one or more key value data pairs extracted from the dynamic mapping between each fiducial marking of the one or more fiducial markings and the respective data element of the one or more data elements.
Therefore, claim 1 of the ‘057 patent teaches every limitation of claim 1 of the ‘369 application.
Claim 9 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 9 of U.S. Patent No. 11,804,057 B1. Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons.
The following table provides a limitation-by-limitation comparison of examined claim 9 and conflicting claim 9.
Application being examined 18/492,369 (hereafter ‘369 application)
Conflicting patent 11,804,057 B1 (hereafter ‘057 patent)
Claim 9
9. At least one non-transient computer-readable storage medium encoded with computer-executable instructions that, when executed by at least one processor, cause the at least one processor to perform:
identifying at least one data object model of one or more data object models for at least one digital file in at least one digital format for a digital representation of one or more documents (see right column underlined text);
detecting, for each respective fiducial marking of one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model and associated with a respective spatial correlation meeting or exceeding a loss metric criterion, a dynamic mapping between the respective fiducial marking and a respective data element of the one or more data elements (see right column bold text); and
generating a digital asset representative of the one or more documents based on one or more key value data pairs extracted from the dynamic mapping between each fiducial marking of the one or more fiducial markings and the respective data element of the one or more data elements. (see right column underlined bold text)
Claim 9
9. At least one non-transient computer-readable storage medium encoded with computer-executable instructions that, when executed by at least one processor, cause the at least one processor to perform:
ingesting an ingest input that comprises at least one digital file in at least one digital format for a digital representation of one or more documents;
utilizing a document identification engine corresponding to a first stage of a multi-stage convolutional neural network for identifying one or more document types of the one or more documents by automatically:
identify at least one data object model of one or more data object models for the one or more documents;
wherein the at least one data object model corresponds to the one or more document types;
utilizing an object detector engine corresponding to a second stage of the multi-stage convolutional neural network for detecting a dynamic mapping in the at least one digital file by automatically:
iteratively generating one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model;
identifying, for each of the one or more fiducial markings, a spatial correlation between:
a respective fiducial marking of the one or more fiducial markings, and
a respective data element of the one or more data elements; and
detecting, for each respective fiducial marking of the one or more fiducial markings associated with a respective spatial correlation meeting or exceeding a loss metric criterion, the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements;
utilizing a post-processing engine for classifying the dynamic mapping in the at least one digital file by automatically:
extracting one or more key value data pairs from the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements;
dynamically generating a digital asset representative of the one or more documents based on the one or more key value data pairs extracted from the dynamic mapping between each fiducial marking of the one or more fiducial markings and the respective data element of the one or more data elements; and
automatically training the multi-stage convolutional neural network to generate the digital asset based on one or more feedback inputs for the digital asset.
Therefore, claim 9 of the ‘057 patent teaches every limitation of claim 9 of the ‘369 application.
Claim 16 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 16 of U.S. Patent No. 11,804,057 B1. Although the claims at issue are not identical, they are not patentably distinct from each other for the following reasons.
The following table provides a limitation-by-limitation comparison of examined claim 16 and conflicting claim 16.
Application being examined 18/492,369 (hereafter ‘369 application)
Conflicting patent 11,804,057 B1 (hereafter ‘057 patent)
Claim 16
16. A system comprising:
a non-transient computer memory, storing software instructions; and
at least one processor of a computing device associated with a user;
wherein when the at least one processor executes the software instructions, the computing device is programmed to:
identify at least one data object model of one or more data object models for at least one digital file in at least one digital format for a digital representation of one or more documents (see right column underlined text);
detect, for each respective fiducial marking of one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model and associated with a respective spatial correlation meeting or exceeding a loss metric criterion, a dynamic mapping between the respective fiducial marking and a respective data element of the one or more data elements (see right column bold text); and
extract one or more key value data pairs from the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements. (see right column underlined bold text)
Claim 16
16. A system comprising:
a non-transient computer memory, storing software instructions; and
at least one processor of a computing device associated with a user;
wherein when the at least one processor executes the software instructions, the computing device is programmed to:
ingest, by the at least one processor of a digital asset generation platform, an ingest input that comprises at least one digital file in one or more digital formats for a digital representation of one or more documents;
utilize a document identification engine corresponding to a first stage of a multi-stage convolutional neural network for identifying one or more document types of the one or more documents by automatically:
identifying at least one data object model of one or more data object models for the one or more documents;
wherein the at least one data object model corresponds to the one or more document types;
utilize an object detector engine corresponding to a second stage of the multi-stage convolutional neural network for detecting a dynamic mapping in the at least one digital file by automatically:
generating one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model;
identify, for each of the one or more fiducial markings, a spatial correlation between:
a respective fiducial marking of the one or more fiducial markings, and
a respective data element of the one or more data elements; and
detecting, for each respective fiducial marking of the one or more fiducial markings associated with a respective spatial correlation meeting or exceeding a loss metric criterion, the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements; and
utilizing a post-processing engine for classifying the dynamic mapping in the at least one digital file by automatically:
extracting one or more key value data pairs from the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements.
Therefore, claim 16 of the ‘057 patent teaches every limitation of claim 16 of the ‘369 application.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-6, 9-14, and 16-20 are rejected under 35 U.S.C. 102(a)(1)/(a)(2) as being anticipated by Patton et al. (US 20210089712 A1, hereafter Patton).
As per claim 1, Patton teaches a computer-implemented method (Abstract; FIG. 2) comprising:
identifying, by at least one processor (FIG. 7 #704) of a digital asset generation platform, at least one data object model of one or more data object models for at least one digital file in at least one digital format for a digital representation of one or more documents (FIG. 3A #310; para. [0083] “In block 310, the rotation engine 118, for example, may convert the received document into an image (e.g., a JPG, PNG, GIF, BMP, or any other format of image file) in order to process the document as a pixel optimization problem”; FIG. 5A displaying a portion of an example document 500 that is accessed for ingestion; Patton accesses a document of a document type and determines a template to apply to the document. See para. [0016] “accessing a first document of a first document type; determining a template associated with the first document type, the template including a plurality of graphical portions associated with input fields”; para. [0098] “For example, the ingestion engine 126 may access a template data store storing a plurality of templates associated with different types of documents (e.g., specific forms or records used by a business)”; The template and associated properties for a document correspond to a data object model);
detecting, by the at least one processor of the digital asset generation platform, for each respective fiducial marking of one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model and associated with a respective spatial correlation meeting or exceeding a loss metric criterion, a dynamic mapping between the respective fiducial marking and a respective data element of the one or more data elements (After normalizing the document image (e.g., by rotation, see FIG. 3A #318) and defining one or more template input fields (FIG. 4), data values are extracted from the document (FIG. 3B #315, 325 and 335). The extraction comprises comparing a first position of a template data field to a second position of a graphical portion associated with the first normalized document (para. [0025]); para. [0100] “The data extraction system may detect input fields in the received document by comparing the x-y coordinates of a user-defined template data field to information presented in the same x-y coordinate in the received document” (here the user-defined template data field and its associated x-y coordinates are considered the “fiducial marking”; see FIG. 4 #420, 424); para. [0101] “For example, the ingestion engine 126 may overlay the template onto the received document and detect an overlap between the user-defined template data fields and information contained in the received document. For example, a rectangular template field may align with text on a document. If the text on the document is not entirely within the template field, the ingestion engine 126 may extract data based at least partly on a level of overlap between the identified text and the template field”. Patton further teaches that data values may be extracted if a graphical portion associated with the first normalized document and a template graphical portion overlap by more than a predetermined overlap percentage (para. [0026]).
That is to say, when the spatial correlation between a template data field (fiducial marking) and a data element in the document image is more than a predetermined overlap percentage, the data element matches the data field and the data value is extracted); and
generating, by the at least one processor of the digital asset generation platform, a digital asset representative of the one or more documents based on one or more key value data pairs extracted from the dynamic mapping between each fiducial marking of the one or more fiducial markings and the respective data element of the one or more data elements (FIG. 3B #345; para. [0102] “In block 345, the data extraction system may generate a structured data set based at least partly on the extracted data. For example, the data extraction system may generate a table where each row of the table represents a separate document and each column in the table represents a data field in the document. The system may generate other types of data sets (e.g., an ordered list)”; The generated table or list can be regarded as a digital asset. As shown in FIG. 6, a table is generated for organizing each extracted field and its associated value. The field, such as “Doc Name”, “Account Number” etc., and the associated value constitute a key value data pair. FIG. 6 includes a plurality of key value data pairs).
As per claim 2, dependent upon claim 1, Patton further teaches:
selecting, by the at least one processor of the digital asset generation platform, from one or more mapping templates, a mapping template for the at least one data object model (para. [0016] “accessing a first document of a first document type; determining a template associated with the first document type, the template including a plurality of graphical portions associated with input fields”; para. [0017] “receiving a user selection defining one or more template input fields and one or more template graphical portions associated with the one or more input fields; and generating one or more templates based at least partly on the user selection, wherein each template is associated with a document type”; para. [0098]); and
identifying, by the at least one processor of the digital asset generation platform, one or more training values of the mapping template (FIG. 4; para. [0043] “A template may define properties specific to the one or more types of documents, such as the number of data input fields and the location of each data input field in the one or more types of documents”; The input data fields and the location of each input data field are considered training values); and
generating, by the at least one processor of the digital asset generation platform, the digital asset comprising the one or more key value data pairs by comparing the one or more training values with the one or more key value data pairs (FIG. 4 showing a template comprising key value pairs; FIG. 6 showing extracted key value pairs; para. [0100] “The data extraction system may detect input fields in the received document by comparing the x-y coordinates of a user-defined template data field to information presented in the same x-y coordinate in the received document”; para. [0026] “Data values may be extracted if a graphical portion associated with the first normalized document and a template graphical portion overlap by more than a predetermined overlap percentage”).
As per claim 3, dependent upon claim 1, Patton further teaches:
extracting, by the at least one processor of the digital asset generation platform, one or more digital objects from the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements (FIG. 4 text object; FIG. 6); and
generating, by the at least one processor of the digital asset generation platform, the digital asset representative of the one or more documents based on the one or more key value data pairs and the one or more digital objects (FIG. 4 shows template for extracting text objects; FIG. 6 shows a table (digital asset) generated from the extracted text objects comprising a plurality of key value pairs; para. [0101]-[0102]).
As per claim 4, dependent upon claim 1, Patton further teaches:
extracting, by the at least one processor of the digital asset generation platform, one or more text objects from the dynamic mapping (FIG. 4; FIG. 6); and
extracting, by the at least one processor of the digital asset generation platform, the one or more key value data pairs from the dynamic mapping based on the one or more text objects (FIG. 6 shows a table (digital asset) generated from the extracted text objects comprising a plurality of key value pairs; para. [0101]-[0102]).
As per claim 5, dependent upon claim 1, Patton further teaches:
extracting, by the at least one processor of the digital asset generation platform, one or more text objects from the at least one digital file based at least in part on the at least one data object model (FIG. 4; FIG. 6); and
extracting, by the at least one processor of the digital asset generation platform, the one or more key value data pairs from the dynamic mapping based on the one or more text objects (FIG. 3B #315, 325, 335; FIG. 6; para. [0016] “accessing a first document of a first document type; determining a template associated with the first document type, the template including a plurality of graphical portions associated with input field”; para. [0043]).
As per claim 6, dependent upon claim 1, Patton further teaches:
generating, by the at least one processor of the digital asset generation platform, based on the one or more key value data pairs, a multi-cell matrix with at least one header cell (FIG. 6 is a table comprising extracted key value pairs in a multi-cell matrix format. Among the cells, at least one cell can serve as a header cell. For example, the bank name data item may be used to identify the indicated bank and to create a link between the document object and an entity object of the bank (para. [0103])); and
generating, by the at least one processor of the digital asset generation platform, the digital asset comprising the one or more key value data pairs in the multi-cell matrix (FIG. 6).
As per claim 9, Patton teaches at least one non-transient computer-readable storage medium (FIG. 7; para. [0005]) encoded with computer-executable instructions that, when executed by at least one processor (FIG. 7; para. [0005]), cause the at least one processor to perform:
identifying at least one data object model of one or more data object models for at least one digital file in at least one digital format for a digital representation of one or more documents (See rejections applied in claim 1 for similar limitations);
detecting, for each respective fiducial marking of one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model and associated with a respective spatial correlation meeting or exceeding a loss metric criterion, a dynamic mapping between the respective fiducial marking and a respective data element of the one or more data elements (See rejections applied in claim 1 for similar limitations); and
generating a digital asset representative of the one or more documents based on one or more key value data pairs extracted from the dynamic mapping between each fiducial marking of the one or more fiducial markings and the respective data element of the one or more data elements (See rejections applied in claim 1 for similar limitations).
Claim 10, dependent upon claim 9, is rejected as applied to claim 2 above.
Claim 11, dependent upon claim 9, is rejected as applied to claim 3 above.
Claim 12, dependent upon claim 9, is rejected as applied to claim 4 above.
Claim 13, dependent upon claim 9, is rejected as applied to claim 5 above.
Claim 14, dependent upon claim 9, is rejected as applied to claim 6 above.
As per claim 16, Patton teaches a system (Abstract; FIG. 7) comprising:
a non-transient computer memory (FIG. 7), storing software instructions (para. [0107]); and
at least one processor (FIG. 7) of a computing device associated with a user (FIG. 4);
wherein when the at least one processor executes the software instructions, the computing device is programmed (para. [0005]) to:
identify at least one data object model of one or more data object models for at least one digital file in at least one digital format for a digital representation of one or more documents (See rejections applied in claim 1 for similar limitations);
detect, for each respective fiducial marking of one or more fiducial markings overlaid on one or more data elements in the at least one digital file based at least in part on the at least one data object model and associated with a respective spatial correlation meeting or exceeding a loss metric criterion, a dynamic mapping between the respective fiducial marking and a respective data element of the one or more data elements (See rejections applied in claim 1 for similar limitations); and
extract one or more key value data pairs from the dynamic mapping between the respective fiducial marking and the respective data element of the one or more data elements (See rejections applied in claim 1 for similar limitations).
Claim 17, dependent upon claim 16, is rejected as applied to claim 2 above.
Claim 18, dependent upon claim 16, is rejected as applied to claim 3 above.
Claim 19, dependent upon claim 16, is rejected as applied to claim 4 above.
Claim 20, dependent upon claim 16, is rejected as applied to claim 5 above.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim 7 is rejected under 35 U.S.C. 103 as being unpatentable over Patton et al. (US 20210089712 A1, hereafter Patton), as applied to claim 1 above, in view of Agarwal et al. (US 20240005640 A1, hereafter Agarwal).
As per claim 7, Patton does not teach the recited limitations.
Agarwal, in an analogous field, discloses a synthetic document generation pipeline for training artificial intelligence models (Abstract). Specifically, Agarwal discloses an entity relationship mapping engine (para. [0061] “entity linking (EL)”), a natural language processing engine (para. [0085] “transformer-based language models (e.g., Bidirectional Encoder Representation from Transformers “BERT”)”; note that BERT is a large language model), and a legal entity recognition engine (para. [0061] “name entity recognition (NER)”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Patton to incorporate the teaching of Agarwal to include an entity relationship mapping engine, a natural language processing engine, and a legal entity recognition engine. Including these engines would allow a document to be configured for generation, as recognized by Agarwal (para. [0061]).
Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Patton et al. (US 20210089712 A1, hereafter Patton), as applied to claim 1 above, in view of Song et al. (US 20230135880 A1, hereafter Song).
As per claim 8, Patton does not teach the recited limitations.
Song, in the same field of endeavor, discloses a text recognition method and a text recognition post-processing method for reflecting user post-correction (Abstract). Specifically, Song teaches that when there is a character misrecognition in the key-value relationship processing result, the user performs post-correction (FIG. 3; para. [0055]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Patton to incorporate the teaching of Song to generate the digital asset based on feedback inputs for the digital asset. Doing so would reflect user post-correction feedback, as mentioned by Song (para. [0006]).
Claim 15 is rejected under 35 U.S.C. 103 as being unpatentable over Patton et al. (US 20210089712 A1, hereafter Patton), as applied to claim 1 above, in view of DeLuca et al. (US 20220198390 A1, hereafter DeLuca).
As per claim 15, Patton teaches at least one header cell and one or more key value data pairs. Patton, however, does not teach identifying whether the at least one header cell is associated with the one or more key value data pairs.
DeLuca, in an analogous field, discloses a method for querying a digital twin marketplace for digital twins relevant to a work order and delivering said digital twins to a mobile computing device (Abstract). DeLuca teaches that after a work order is received, keys and values are extracted. Specifically, for a work order with specified fields defined by headers and corresponding values, those values are automatically extracted along with the field headers to form the key-value pairs (para. [0037]). The key-value pair extraction process identifies whether a header is associated with at least one key-value pair; if an association exists, the associated key-value pairs are extracted.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teaching of Patton to incorporate the teaching of DeLuca to identify whether at least one header cell is associated with the one or more key value data pairs. Doing so would make the associated key-value pairs searchable and retrievable, as suggested by DeLuca (para. [0037]).
Contact
Any inquiry concerning this communication or earlier communications from the examiner should be directed to XUEMEI G CHEN whose telephone number is (571)270-3480. The examiner can normally be reached Monday-Friday 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, John M Villecco can be reached on (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/XUEMEI G CHEN/Primary Examiner, Art Unit 2661