Prosecution Insights
Last updated: April 19, 2026
Application No. 17/813,163

SYSTEMS AND METHODS FOR INTELLIGENT DOCUMENT VERIFICATION

Status: Final Rejection §103
Filed: Jul 18, 2022
Examiner: BLACK, LINH
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: DELL PRODUCTS, L.P.
OA Round: 4 (Final)

Grant Probability: 51% (Moderate)
Expected OA Rounds: 5-6
Time to Grant: 5y 1m
Grant Probability with Interview: 62%

Examiner Intelligence

Career Allow Rate: 51% (222 granted / 437 resolved; -4.2% vs TC avg)
Interview Lift: +11.5% (moderate) for resolved cases with interview
Typical Timeline: 5y 1m average prosecution; 40 applications currently pending
Career History: 477 total applications across all art units

Statute-Specific Performance

§101: 12.3% (-27.7% vs TC avg)
§103: 64.0% (+24.0% vs TC avg)
§102: 16.5% (-23.5% vs TC avg)
§112: 3.3% (-36.7% vs TC avg)
Based on career data from 437 resolved cases; Tech Center averages are estimates.

Office Action

§103
DETAILED ACTION

This communication is in response to the Arguments/Remarks dated 7/11/2025. Claims 1-4, 6-13, and 15-21 are pending in the application.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claims 1-4, 6-13, and 15-21 have been considered. Regarding the §101 arguments, the examiner finds that the amended limitations in independent claims 1, 10, and 19, "generating, by the document verification service, one or more templates for extracting target data, the one or more templates generated by one or more classification models according to one or more historical document sources and one or more historical document types, wherein the one or more classification models are trained on a corpus of the historical documents," require a template-generating action by classification models that cannot practically be performed in the mind. Thus, the claims do not recite abstract ideas.

Regarding the arguments relating to the amended limitations in independent claims 1, 10, and 19, please see the new combination of references, with columns and lines cited, below. The teachings of Kobashi et al. are no longer applied in this Office action.

It is noted that REFERENCES ARE RELEVANT AS PRIOR ART FOR ALL THEY CONTAIN: "The use of patents as references is not limited to what the patentees describe as their own inventions or to the problems with which they are concerned. They are part of the literature of the art, relevant for all they contain." In re Heck, 699 F.2d 1331, 1332-33, 216 USPQ 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 USPQ 275, 277 (CCPA 1968)). A reference may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art, including non-preferred embodiments (see MPEP 2123). The Examiner has cited particular locations in the references as applied to the claims for the convenience of the Applicants. Although the specified citations are representative of the teachings of the art and are applied to the specific limitations within the individual claims, other passages and figures typically apply as well.

Rezvani et al. teaches generating, by the document verification service, one or more templates for extracting target data, the one or more templates generated by one or more classification models according to one or more historical document sources and one or more historical document types, at para. 16: generate electronic versions of the form documents for easy access and sharing among users; para. 24: the feature extraction engine uses the extracted text and coordinates of the template form document to generate an electronic version of the template form document; para. 90: creating a template for optical character recognition (OCR) for each document type; based on the training, the device may create a template OCR for each document type, where OCR may involve the electronic conversion of images of typed, handwritten, or printed text into machine-encoded text; each template OCR may be applied to subsequently extract information from documents similar to the sample documents used to train the classifiers, and thus the classifiers may help organize documents for subsequent information extraction using the template OCRs. Please see the new combination of references below.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-13, and 15-20 are rejected under 35 U.S.C. 103 as being unpatentable over Chhichhia et al. (US 20160147891) in view of Jean et al. (US 20160148074), and further in view of Rezvani et al. (US 20210064866).

Specification, para. 29: in cases where an application or data repository does not provide an interface or API, other means, such as printing and/or imaging, may be utilized to collect information therefrom (e.g., generate an image of printed historical documents); optical character recognition (OCR) technology can then be used to convert the image of the content to textual data. Para. 61 discloses that "the document verification module can utilize a document type classification engine (e.g., document type classification engine 114 of Fig. 1B) to determine a document type of the document".

As per claims 1, 10, and 19, Chhichhia et al. teaches a method comprising:

providing, by a document verification service, a plurality of application interfaces to a plurality of document data sources in an organization (para. 63: extracted text characters may be assembled into words based on semantics; for example, the string "Berriesaregood" may be input to a semantic analysis tool, which matches the string to dictionary entries or Internet search terms and outputs the longest match found within the string, the outcome being the semantically meaningful string "Berries are good"; fig. 1: content is managed by staging, validation, etc.; para. 37: the document may have a plurality of pages organized into chapters, which could be further divided into one or more sub-chapters, and each page may have text, images, tables, graphs, or other items distributed across the page; para. 111: the classification module generates a user interface for display to the evaluators that provides a validation task for each of a subset of documents, example validation tasks including approving or rejecting the labels assigned by the learned model or rating the labels on a scale);

periodically collecting, by the verification service, historical documents through the plurality of application interfaces (fig. 1: data collection analysis, content rendering data collection interface; para. 36: Content block 101 automatically gathers and aggregates content from a large number of sources, categories, and partners, and these systems define the interfaces and processes to automatically collect various content sources into a formalized staging environment; para. 44: the eCommerce system interfaces with back office systems, publishing, and distribution to integrate marketing, selling, servicing, and receiving payment for digital products and services; para. 160);

wherein the one or more classification models are trained on a corpus of the historical documents; receiving, by the document verification service, a request to verify a document from a client device (fig. 5: content classification system, model trainer, learned model, classification module; para. 37-38: the ingestion module 120, including staging, validation, and normalization subsystems, ingests published documents; para. 79: the content classification system trains a model for assigning taxonomic labels to a representative content entity, which is a content entity determined to have a high degree of similarity to the other content entities of the catalog database; fig. 2: existing document; para. 86: the model trainer extracts features from the training documents and uses the features to train a model for assigning taxonomic labels to an arbitrary document, a process performed by the model trainer for generating the learned model; fig. 1: content rendering data collection interface; para. 42-44, 111: the classification module generates a user interface for display to the evaluators that provides a validation task for each of a subset of documents, example validation tasks including approving or rejecting the labels assigned by the learned model or rating the labels on a scale);

extracting, by the document verification service, text from the document; lemmatizing the text extracted from the document using a natural language process (para. 89, 91: the feature extraction module processes each sample by a standardized data processing scheme to clean and transform the documents; in particular, the feature extraction module extracts metadata from the documents, including title, author, publication date, description, keywords, and the like, and the metadata and/or the text of the documents are analyzed by one or more semantic techniques, such as term frequency-inverse document frequency normalization, part-of-speech tagging, lemmatization, latent semantic analysis, or principal component analysis, to generate feature vectors of each sample; as the documents in each content entity may be in a variety of different formats, one embodiment of the feature extraction module includes a pipeline dedicated to each document format for extracting metadata and performing semantic analysis; para. 76, 105-107: the model trainer outputs the learned model to the classification module for classifying documents of the other content entities; para. 62-63: a string of "Berriesaregood" represents extracted characters without considering spacing information; once spacing is taken into consideration, the same string becomes "Berries are good"; the same text is analyzed by both spacing and semantics so that word grouping results may be verified and enhanced);

generating from the lemmatized text, by the document verification service, a reference text combination of the document (para. 49-50: the elements referenced within each class become identified by their respective content layer; specifically, all the related content page-based elements that are matched with a particular reconstructed document are classified as part of the related content layer, and similarly all other document enhancement processes, including user generated, advertising, and social among others, are classified by their specific content layer; the outcome of Phase 2 is a series of static and dynamic page-based content layers that are logically stacked on top of each other and which collectively enhance the reconstructed foundation document; para. 111: generates a user interface for display to the evaluators that provides a validation task for each of a subset of documents; para. 141: for the tokens not discarded during the filtering, the topic extraction module lemmatizes each token, converting plural nouns to singular and conjugated verbs to their root forms);

determining, by the document verification service using a first classification model of the one or more classification models including a multiclass support vector machine, a type of the document based on the reference text combination of the document (figs. 10-11: generating a learned model for assigning taxonomic labels to a representative content entity, and assigning taxonomic labels to documents using the learned model; para. 37, 49: by having each of these processes focus on specific classes of content and databases, the elements referenced within each class become identified by their respective content layer; fig. 5: content classification system, classification module; para. 78-79: the content classification system trains a model for assigning taxonomic labels to a representative content entity, which is a content entity determined to have a high degree of similarity to the other content entities of the catalog database; para. 86-87, 116; para. 156, 161: the usage of Support Vector Machines).
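To make these limitations concrete for readers less familiar with the techniques at issue, here is a minimal sketch, assuming scikit-learn and NLTK (neither library is named in the application or the cited references), of the general pattern the claim language describes: lemmatize extracted text, then classify the document's type with a multiclass support vector machine. The corpus, labels, and helper names are hypothetical, not drawn from the record.

```python
# Illustrative sketch only: lemmatize extracted text, then classify document
# type with a multiclass SVM. Libraries, corpus, and labels are assumptions.
from nltk.stem import WordNetLemmatizer  # requires nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

lemmatizer = WordNetLemmatizer()

def lemmatize(text: str) -> str:
    # Reduce each token to its root form ("invoices" -> "invoice").
    return " ".join(lemmatizer.lemmatize(tok) for tok in text.lower().split())

# Hypothetical corpus of historical documents with known types.
historical_texts = [
    "invoices numbers due dates total amounts remitted",
    "purchase orders shipped quantities ordered items",
    "bills of lading carriers consignees freight charges",
]
historical_types = ["invoice", "purchase_order", "bill_of_lading"]

# TF-IDF features over lemmatized text; LinearSVC trains one-vs-rest binary
# SVMs, which together act as a multiclass classifier.
type_model = Pipeline([
    ("tfidf", TfidfVectorizer(preprocessor=lemmatize)),
    ("svm", LinearSVC()),
])
type_model.fit(historical_texts, historical_types)

doc_type = type_model.predict(["invoice total amount due date"])[0]
```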
Chhichhia et al. does not explicitly teach generating, by the document verification service, one or more templates for extracting target data, the one or more templates generated by one or more classification models according to one or more historical document sources and one or more historical document types; nor the SVM.

Jean teaches generating, by the document verification service, one or more templates for extracting target data, the one or more templates generated by one or more classification models according to one or more historical document sources and one or more historical document types (fig. 18 and fig. 30: template discovery; para. 59, 134: an RLM algorithm for form type classification, in which the query image first undergoes feature extraction, generating the feature and BoVW vectors for subsequent classification; in retrieval, the vectors are classified and a list of candidate templates, such as from a library or database of templates, is generated for matching, and matching registers all candidate templates against the query instance and selects the candidate that achieves the best alignment score; para. 142: a task of each classifier (e.g., SIFT classifier, ORB classifier, and WORD classifier) is to assign a template class label to each element in its corresponding set of feature vectors; para. 172, 145: analyze the proper/optimal heuristics for training the classifier models; para. 152: the dataset is split into a training set and a validation set, with 80% of the data chosen at random for training and the remaining 20% reserved for testing as part of the validation set; para. 156: the classifier models thus far have been initialized with a preliminary set of user-provided templates; para. 128-129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers);

determining, by the document verification service using a first classification model of the one or more classification models including a multiclass support vector machine, a type of the document based on the reference text combination of the document (para. 41-44: to extract the data from the form image, the form is recognized from a library of forms, and the data is extracted based on optical character recognition (OCR) and knowledge of the form; extracting data from more than digital images of forms, such as objects that have a particular template and where instances of the objects change within the framework of the particular template; introduces technology that includes a supervised framework for visually classifying paper forms, referred to as a retrieval, learning, and matching (RLM) algorithm; para. 47-48, 121: from a statistical point of view, the way humans solve the problem was not as optimal as the SVM classifiers used in the study, and models can cooperate to improve the overall reliability of retrieval; para. 129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers; para. 154: usage of SVM in classification techniques; para. 160: from a statistical point of view, the way humans solve the problem was not as optimal as the SVM classifiers used in the study);

determining, by the document verification service using a second classification model of the one or more classification models, a source of the document based on the reference text combination of the document (para. 109, 113: use the k-nearest-neighbor (kNN) algorithm to train and predict the form class of a vector; para. 71: the registration algorithm is based on the SURF feature detector and begins by extracting and encoding keypoints in both the template and the query image, with extracted features then matched across the two images using the nearest neighbor strategy; para. 129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers; para. 154: usage of SVM in classification techniques);

determining, by the document verification service, a template from the one or more generated templates for the type of the document and the source of the document, the template indicating positioning of target data in documents of the same type and source as the document (para. 57: an RLM classification framework for form type detection; para. 66: using a nearest-neighbor-based strategy, putative matches can be found between pairs of keypoints, "putative" indicating that keypoints could have multiple matches due to having very similar or identical descriptors usable for multiple keypoints of the same image; para. 84-85: RLM was developed to improve the computational efficiency of template type detection, first retrieving a list of visually similar document images and providing the best h templates for alignment, where h is significantly less than the total number of M possible templates; para. 129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers; para. 154: usage of SVM in classification techniques);

extracting, by the document verification service, one or more target data from the document using the template; and verifying, by the document verification service, the one or more target data extracted from the document (para. 42: extracting data from digital images, for example document data such as the name of a patient, a diagnosis of a patient, etc. (a portion of the document) from a digital image of a document; to extract the data from the form image, the form is recognized from a library of forms, and the data is extracted based on optical character recognition (OCR) and knowledge of the form; para. 67-69, 57: estimate the retrieval error and update the algorithm to avoid the same future mistakes (verification); para. 65: returning to feature-based registration (valid data), once features have been respectively extracted from a template and a query image (target data), a matching mechanism can find correspondences between keypoints across two images based on the similarity of their descriptors; para. 129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers);

generating a verification of the one or more target data for transmission to a requesting user (para. 65: returning to feature-based registration (valid data), once features have been respectively extracted from a template and a query image (target data), a matching mechanism can find correspondences between keypoints across two images based on the similarity of their descriptors; para. 129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers; para. 134: an RLM algorithm for form type classification, in which the query image first undergoes feature extraction to generate the feature and BoVW vectors for subsequent classification, the vectors are classified in retrieval to generate a list of candidate templates from a library or database of templates, and matching registers all candidate templates against the query instance and selects the candidate that achieves the best alignment score; para. 142: identifies the template class from the final list of candidate templates that achieves the best alignment score for the query form instance).

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chhichhia et al. with the SVM teaching of Jean in order to effectively verify and classify types of documents and thus improve time performance.
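To make the template-based extraction limitation concrete, here is a minimal sketch, assuming Pillow and pytesseract (neither is named in the record), of a template that indicates the positioning of target data and an extraction step that reads each positioned field via OCR. The field names and coordinates are hypothetical.

```python
# Illustrative sketch only: extract target data from a document image using
# a template that maps field names to bounding boxes. Field names, boxes,
# and the Pillow/pytesseract choice are assumptions, not from the record.
from PIL import Image
import pytesseract  # requires a local Tesseract OCR installation

# A hypothetical template for one (document type, document source) pair:
# each target datum is located by pixel coordinates (left, top, right, bottom).
template = {
    "invoice_number": (420, 60, 760, 100),
    "total_amount":   (560, 940, 780, 990),
}

def extract_target_data(image_path: str, template: dict) -> dict:
    page = Image.open(image_path)
    extracted = {}
    for field, box in template.items():
        region = page.crop(box)  # cut out the region holding this field
        extracted[field] = pytesseract.image_to_string(region).strip()
    return extracted

# Usage (hypothetical file): fields = extract_target_data("scan.png", template)
```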
Even if Chhichhia et al. and Jean do not explicitly teach wherein the one or more classification models are trained on a corpus of the historical documents, and receiving, by the document verification service, a request to verify a document from a client device, Rezvani et al. teaches said limitations at para. 23: upon reception of one or more documents and a request from the user to perform the classification task, the device may classify the document(s) using the trained classification model such that the documents are organized and routed to particular destinations as defined in the classifiers used by the model; particularly, the device may route the documents into different folders and/or other destinations (e.g., email addresses or printers for printing) according to the mappings provided by the classifiers used by the classification model; and at para. 27-29: a classification service may use one or more classification models, which may each include a number of classifiers, and one or more of these classification models may depend on a ratio of the number of sample/historical documents to the number of words within each sample document during training; for instance, a classification service designed to classify phone bill invoices for an organization, where the printing device may enable a user to create and train the classification service via an interface of the printing device and/or obtain the classification service from another computing device, such that the classification model of the service may use a different classifier for each network vendor to classify the phone bill invoices according to network vendors.

Rezvani et al. also teaches generating, by the document verification service, one or more templates for extracting target data, the one or more templates generated by one or more classification models according to one or more historical document sources and one or more historical document types, at para. 16: generate electronic versions of the form documents for easy access and sharing among users; para. 24: the feature extraction engine uses the extracted text and coordinates of the template form document to generate an electronic version of the template form document; para. 90: creating a template for optical character recognition (OCR) for each document type; based on the training, the device may create a template OCR for each document type, where OCR may involve the electronic conversion of images of typed, handwritten, or printed text into machine-encoded text; each template OCR may be applied to subsequently extract information from documents similar to the sample documents used to train the classifiers, and thus the classifiers may help organize documents for subsequent information extraction using the template OCRs.

Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chhichhia et al. and Jean with Rezvani's teaching of classification models trained on a corpus of historical documents and of receiving a request to verify a document from a client device, in order to effectively verify and classify types of documents and thus improve verification performance.

As per claims 2 and 11, Chhichhia et al. teaches wherein the reference text combination is in a portion of the document (para. 73: the content catalog database 402 includes a number of content entities, such as textbooks, courses, jobs, and videos, with each content entity including a set of documents of a similar type; for example, a textbooks content entity is a set of electronic textbooks or portions of textbooks, a courses content entity is a set of documents describing courses, such as course syllabi, and a jobs content entity is a set of documents relating to jobs or job openings, such as descriptions of job openings; para. 137, 150: as the locations of the topics in the documents are known, the portions of the documents including the selected topic may be displayed to the user; for example, identifiers of the sections of the documents including a topic are provided to the user in response to the user's selection of a topic). Jean et al. also teaches wherein the reference text combination is in a portion of the document (para. 42, 118: in CBIR (content-based image retrieval), the final vector can take into account visual words occurring throughout the entire image; in some embodiments, to obtain multiple BoVWs (bags of visual words) for a single image, the image is partitioned into 9 separate regions, as shown at 1400 of FIG. 14).

As per claims 3 and 12, Chhichhia et al. teaches wherein the reference text combination is determined from text extracted from a portion of the document (para. 151: by interacting with the topic graph, the user can browse subjects or topics of interest and identify documents describing the topics). Jean et al. also teaches wherein the reference text combination is determined from text extracted from a portion of the document (para. 42: extracting data from digital images, for example document data such as the name of a patient, a diagnosis of a patient, etc. (a portion of the document) from a digital image of a document; to extract the data from the form image, the form is recognized from a library of forms, and the data is extracted based on optical character recognition (OCR) and knowledge of the form).

As per claims 4 and 13, Chhichhia et al. teaches wherein verifying the one or more target data includes comparing one of the one or more target data to valid data (para. 63: by interacting with the topic graph, the user can browse subjects or topics of interest and identify documents describing the topics). Jean et al. also teaches wherein verifying the one or more target data includes comparing one of the one or more target data to valid data (para. 56: images in a collection of form templates are first converted into a specific statistical representation and stored in memory (historical documents of the type of templates); when a new form instance (reference text combination) is submitted, the system can use the numerical structure to retrieve similar images; para. 65: returning to feature-based registration (valid data), once features have been respectively extracted from a template and a query image (target data), a matching mechanism can find correspondences between keypoints across two images based on the similarity of their descriptors; para. 129: verify that the degree of visual similarity in the topmost candidate templates depends on the level of agreement between the original classifiers).

As per claims 6 and 15, Chhichhia et al. teaches wherein the first classification model is trained using supervised learning with reference text combinations extracted from a portion of each document of a plurality of historical documents (fig. 5: taxonomic labels, model trainer, learned model, classification module; fig. 11: augment learner training data with high-confidence judgments and retrain the learner (supervised learning); para. 87: a hierarchical taxonomy for educational content includes categories and subjects within each category; for example, art, engineering, history, and philosophy are categories in the educational hierarchical taxonomy, and mechanical engineering, biomedical engineering, computer science, and electrical engineering are subjects within the engineering category; the taxonomic labels 547 may include any number of hierarchical levels; the classification module 550 assigns one or more taxonomic labels to each document of the other content entities using the learned model 545 and classifies the documents based on the applied labels, a process described with respect to FIG. 11). Jean et al. also teaches wherein the first classification model is trained using supervised learning with reference text combinations extracted from a portion of each document of a plurality of historical documents (para. 44: a supervised framework for visually classifying paper forms, referred to as a retrieval, learning, and matching (RLM) algorithm; the disclosure takes a phased approach to methodically construct and combine statistical and predictive models for identifying form classes in unseen images, and some embodiments apply to a supervised setting in which example templates, which are templates based on example/reference forms, are used to define form classes prior to classifying new instances of documents).

As per claims 7 and 16, Chhichhia et al. teaches wherein the second classification model includes a k-nearest neighbor (k-NN) classifier (para. 95: the learner implemented by the entity relationship analysis module is an instance-based learner, and each document is a training example for the document's content entity; for example, the entity relationship analysis module generates a k-nearest neighbor classifier based on the entity labels of the samples). Jean et al. also teaches wherein the second classification model includes a k-nearest neighbor (k-NN) classifier (para. 113: use the k-nearest-neighbor (kNN) algorithm to train and predict the form class of a vector).

As per claims 8 and 17, Chhichhia et al. teaches wherein the k-NN classifier determines the source of the document based on distance measures of the reference text combination of the document and reference text combinations of a plurality of historical documents of the same type as the document (para. 96: if the learner is a k-nearest neighbor classifier, the learner assigns each set-aside sample an entity label based on the similarity between features of the sample and features of the other content entities (the source); para. 171: the visualization provides one or more paths through the topics originating at a source topic, and the paths may include paths to one or more topics related to the source topic; fig. 13: taxonomy; fig. 16: labeled documents, extract topics, score affinity of topics to nodes of taxonomy, scored topics, generate topic graph). Jean et al. also teaches wherein the k-NN classifier determines the source of the document based on distance measures of the reference text combination of the document and reference text combinations of a plurality of historical documents of the same type as the document (para. 56: images in a collection of form templates are first converted into a specific statistical representation and stored in memory (historical documents of the type of templates); when a new form instance (reference text combination) is submitted, the system can use the numerical structure to retrieve similar images; para. 113: use the k-nearest-neighbor (kNN) algorithm to train and predict the form class of a vector).
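To make the claims 7/16 and 8/17 limitations concrete, here is a minimal sketch, assuming scikit-learn (not named in the record), of a k-NN classifier that assigns a document's source from distance measures between its reference text combination and those of historical documents of the same type. The texts, source labels, and cosine metric are illustrative choices only.

```python
# Illustrative sketch only: a k-NN "second classification model" that
# predicts a document's source by distance between text vectors. The data
# and metric are assumptions, not from the record.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical reference text combinations from historical documents of the
# same type, each labeled with the source system it came from.
reference_texts = [
    "acme corp invoice remit to accounts payable",
    "globex invoice billing department portal",
    "acme corp invoice po box payment terms",
]
sources = ["acme_erp", "globex_portal", "acme_erp"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(reference_texts)

# Cosine distance is a common choice for sparse text vectors; the claim
# language only requires some distance measure.
source_model = KNeighborsClassifier(n_neighbors=3, metric="cosine")
source_model.fit(X, sources)

doc_source = source_model.predict(
    vectorizer.transform(["acme corp invoice payment terms"])
)[0]
```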
As per claims 9, 18, and 20, Chhichhia et al. does not teach said claims. Jean et al. teaches wherein the template is generated utilizing the first classification model and the second classification model (para. 43-44: extract data from digital images of objects, such as objects that have a particular template and where instances of the objects change within the framework of the particular template, examples of such objects including a license of a motor vehicle, an official government document, etc.; some embodiments apply to a supervised setting in which example templates, which are templates based on example/reference forms, are used to define form classes prior to classifying new instances of documents; para. 53: training consists of creating a template form library by storing training images and their associated descriptors and/or class types, where the training images can include form templates; para. 121: combine the predictions of all classifiers previously evaluated, the goal being to improve the overall recall and robustness of retrieval; models can cooperate to improve the overall reliability of retrieval). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chhichhia et al. and the classification models of Jean in order to effectively classify types of documents.

Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over Chhichhia et al. (US 20160147891) in view of Jean et al. (US 20160148074), and further in view of Rezvani et al. (US 20210064866) and Balin (US 20200272814).

As per claim 21, Chhichhia, Jean, and Rezvani et al. do not teach the limitations of said claim. Balin teaches calibrating the positioning of the target data in the template to accommodate a variation in the location of the target data in the document, wherein the variation is generated from the corpus of historical documents (para. 3, 16-17: data extracted from an image of a form document may be stored in an electronic database and may also be used to generate electronic versions of the form documents for easy access and sharing among users; form documents and/or images of form documents may be provided via scan, fax, email, instant message, text/multimedia message, or other electronic conveyance by a user through a user interface or API of the image processing engine; variation may be a result of manual scanning or faxing, variations in equipment used to provide images of form documents to the image processing engine, user error, and the like; to account for this variation, and to automatically extract data from the form documents, the image processing engine performs a calibration on a template form document and modifies completed form documents of the same form document type until the coordinates or locations of the text in the completed form documents match or are similar to those in the template form document; para. 24). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Chhichhia, Jean, and Rezvani et al. with Balin's variation generated from the corpus of historical documents in order to effectively merge document contents or recognize variations/versions of the document templates.
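To make the Balin-style calibration concrete, here is a minimal sketch, again assuming pytesseract (not named in the record), that shifts a template's field coordinates by the offset between an anchor phrase's observed and expected positions, so that target-data locations line up despite scan-to-scan variation. The anchor word and coordinates are hypothetical; it reuses the template dictionary shape from the earlier extraction sketch.

```python
# Illustrative sketch only: calibrate field positions to absorb scan-to-scan
# variation by locating a known anchor word and shifting every template box
# by the observed offset. Anchor, coordinates, and library are assumptions.
from PIL import Image
import pytesseract

EXPECTED_ANCHOR = (100, 50)  # (left, top) of the word "INVOICE" in the template

def find_anchor(page: Image.Image, word: str = "INVOICE"):
    # image_to_data returns per-word boxes: "text", "left", "top", ...
    data = pytesseract.image_to_data(page, output_type=pytesseract.Output.DICT)
    for i, text in enumerate(data["text"]):
        if text.strip().upper() == word:
            return data["left"][i], data["top"][i]
    return EXPECTED_ANCHOR  # anchor not found: fall back to no correction

def calibrate(template: dict, page: Image.Image) -> dict:
    ax, ay = find_anchor(page)
    dx, dy = ax - EXPECTED_ANCHOR[0], ay - EXPECTED_ANCHOR[1]
    # Shift every field box by the observed offset before extraction.
    return {field: (l + dx, t + dy, r + dx, b + dy)
            for field, (l, t, r, b) in template.items()}
```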
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Shay et al. (US 20040158587) teaches at para. 35: the image version is stored for archival purposes; the OCR software is calibrated to recognize particular fields within common patent office forms to capture data from those forms so that appropriate data (e.g., due dates, Examiner's name, Applicant, application no., etc.) from such papers can be parsed and entered into database 106; to this end, the fields of various patent office forms that are scanned by mailroom 108 are mapped to database 106 along with the document type (determined from the form recognition sequence) in order to enable the system to determine the appropriate docketing deadlines. Macciola et al. (US 8855375) teaches at col. 37, last paragraph: based on the determined content, the digital representation of the document may be determined to correspond to one or more known document types, utilizing information about the known document type(s). Martens et al. (US 20140229164) teaches at para. 81: the support vector machine (SVM) technique, generating models as shown in Eq. (3), can be shown to often perform quite well for document classification, as it can employ regularization to control the complexity of the model. Dorai (US 20230012801) discloses classification models at para. 69-71.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINH BLACK, whose telephone number is (571) 272-4106. The examiner can normally be reached 9AM-5PM EST, M-F. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LINH BLACK/
Examiner, Art Unit 2163
10/13/2025

/TONY MAHMOUDI/
Supervisory Patent Examiner, Art Unit 2163

Prosecution Timeline

Jul 18, 2022: Application Filed
Mar 16, 2024: Non-Final Rejection — §103
Jun 20, 2024: Response Filed
Oct 10, 2024: Final Rejection — §103
Dec 17, 2024: Response after Non-Final Action
Jan 16, 2025: Request for Continued Examination
Jan 23, 2025: Response after Non-Final Action
Apr 05, 2025: Non-Final Rejection — §103
Jun 10, 2025: Interview Requested
Jul 11, 2025: Response Filed
Oct 29, 2025: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602376: SYSTEMS AND METHODS FOR DATA CURATION IN A DOCUMENT PROCESSING SYSTEM (granted Apr 14, 2026; 2y 5m to grant)
Patent 12530339: DISTRIBUTED PLATFORM FOR COMPUTATION AND TRUSTED VALIDATION (granted Jan 20, 2026; 2y 5m to grant)
Patent 12468835: SYSTEM AND METHOD FOR SESSION-AWARE DATASTORE FOR THE EDGE (granted Nov 11, 2025; 2y 5m to grant)
Patent 12461923: SUITABILITY METRICS BASED ON ENVIRONMENTAL SENSOR DATA (granted Nov 04, 2025; 2y 5m to grant)
Patent 12450239: METHODS AND APPARATUS FOR IMPROVING SEARCH RETRIEVAL (granted Oct 21, 2025; 2y 5m to grant)

Study what changed to get past this examiner; based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 51%
With Interview: 62% (+11.5%)
Median Time to Grant: 5y 1m
PTA Risk: High

Based on 437 resolved cases by this examiner. Grant probability is derived from the career allow rate.
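As a quick consistency check on how these figures relate (assuming, as the page implies but does not state, that the interview lift is additive in percentage points): 222 grants / 437 resolved cases ≈ 50.8%, which rounds to the 51% baseline grant probability, and 51% + 11.5 ≈ 62.5%, shown here as the 62% with-interview estimate.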
