Prosecution Insights
Last updated: April 19, 2026
Application No. 18/486,916

Systems and Methods For Structured Bayesian Classification For Content Management

Final Rejection (§101, §103)

Filed: Oct 13, 2023
Examiner: BHUYAN, MOHAMMAD SOLAIMAN
Art Unit: 2168
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Micro Focus LLC
OA Round: 4 (Final)

Grant Probability: 84% (Favorable)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 2y 5m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (above average), 137 granted / 164 resolved, +28.5% vs TC avg
Interview Lift: +22.8% (strong), measured on resolved cases with vs. without an interview
Avg Prosecution: 2y 5m typical; 17 currently pending
Career History: 181 total applications across all art units
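The card figures above are internally consistent and easy to re-derive; a quick sketch, using only the counts stated on the card itself:

```python
# Reproduce the examiner-card figures above from the raw counts.
granted, resolved, pending = 137, 164, 17

allow_rate = granted / resolved      # career allowance rate
total_apps = resolved + pending      # cross-check against "181 Total Applications"

print(f"allow rate: {allow_rate:.1%}")      # allow rate: 83.5% (card shows it rounded to 84%)
print(f"total applications: {total_apps}")  # total applications: 181
```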

Statute-Specific Performance

§101: 16.5% (-23.5% vs TC avg)
§103: 44.2% (+4.2% vs TC avg)
§102: 15.8% (-24.2% vs TC avg)
§112: 15.5% (-24.5% vs TC avg)

Black line = Tech Center average estimate. Based on career data from 164 resolved cases.
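The per-statute deltas are plain differences against the Tech Center average estimate; re-deriving the baseline from the table shows every row implies the same 40.0% estimate:

```python
# Allowance rates by statute (from the table above) and their stated
# deltas versus the Tech Center average estimate (the chart's black line).
rates  = {"§101": 16.5, "§103": 44.2, "§102": 15.8, "§112": 15.5}
deltas = {"§101": -23.5, "§103": 4.2, "§102": -24.2, "§112": -24.5}

# Subtracting each delta from its rate recovers the same baseline,
# so the implied TC average estimate is 40.0% for every statute.
baseline = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(baseline)  # {'§101': 40.0, '§103': 40.0, '§102': 40.0, '§112': 40.0}
```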

Office Action

§101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant's response filed 30 October 2025 has been considered and entered. Accordingly, claims 1-5 and 8-22 are pending in this application. Claims 1, 13 and 18 are currently amended; claims 2-3, 10-12, 14-15, 17 and 19-22 are previously presented; claims 4-5, 8-9 and 16 are original; and claims 6-7 are cancelled.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

3. Claims 1-5 and 8-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. Independent claim 1 recites a series of steps and is therefore a process; as such, claim 1 falls within one of the statutory categories. The claim recites “analyze the metadata in the structured format to generate features enhancing the metadata in the structured format, wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document; produce classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format”, which is a concept that can be performed in the human mind. These limitations, as drafted, recite a process that, under its broadest reasonable interpretation, covers performance of the limitations in the mind but for the recitation of generic computer components.
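The limitation quoted above amounts to scoring three features (the field, the value, and the field-value combination) by the probability that each correctly identifies the document's category. A minimal Naive-Bayes-style sketch of that idea; the training triples and category names below are hypothetical, not taken from the application:

```python
# Illustrative only: estimate how strongly a metadata field, a value,
# and the field-value pair each predict a document category.
# All training data here is invented for the example.
training = [
    ("author", "smith", "contract"), ("author", "smith", "contract"),
    ("type", "pdf", "contract"), ("type", "pdf", "invoice"),
    ("author", "jones", "invoice"), ("type", "doc", "invoice"),
]

def feature_probabilities(field, value, category):
    """P(category | field), P(category | value), P(category | field, value)."""
    probs = {}
    for name, match in [
        ("field", lambda f, v: f == field),
        ("value", lambda f, v: v == value),
        ("combination", lambda f, v: (f, v) == (field, value)),
    ]:
        hits = [c for f, v, c in training if match(f, v)]
        probs[name] = hits.count(category) / len(hits) if hits else 0.0
    return probs

# field alone is weaker evidence (~0.67) than the value or the pair (1.0)
print(feature_probabilities("author", "smith", "contract"))
```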
If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception.

Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. Claim 1 recites the additional element: “receive a document including at least one of structured data, semi-structured data, and unstructured data, wherein the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format; wherein the content in the unstructured format includes one of natural language data, speech data, and video data”, which indicates mere data gathering in conjunction with a law of nature or abstract idea that the courts have found to be insignificant extra-solution activity. See MPEP 2106.05(g).

Use of a computer or other machinery in its ordinary capacity for economic or other tasks (e.g., to receive, store, or transmit data), or simply adding a general purpose computer or computer components after the fact to an abstract idea (e.g., a fundamental economic practice or mathematical equation), does not integrate a judicial exception into a practical application or provide significantly more. See Affinity Labs of Texas, LLC v. DIRECTV, LLC, 838 F.3d 1253, 1262, 120 USPQ2d 1201, 1207 (Fed. Cir. 2016) (cellular telephone); TLI Communications LLC v. AV Automotive, LLC, 823 F.3d 607, 613, 118 USPQ2d 1744, 1748 (Fed. Cir. 2016) (computer server and telephone unit). See MPEP 2106.05(f). The addition of insignificant extra-solution activity does not amount to an inventive concept.
Claim 1 also recites additional elements: “receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classify the document by associating the document with the predicted category and store a classification record in memory”. These limitations, in reciting the steps of receiving, training, learning and inputting, provide nothing more than mere instructions to implement an abstract idea on a generic computer. See MPEP 2106.05(f).

MPEP 2106.05(f) provides the following considerations for determining whether a claim simply recites a judicial exception with the words “apply it” (or an equivalent), such as mere instructions to implement an abstract idea on a computer: (1) whether the claim recites only the idea of a solution or outcome, i.e., the claim fails to recite details of how a solution to a problem is accomplished; (2) whether the claim invokes computers or other machinery merely as a tool to perform an existing process; and (3) the particularity or generality of the application of the judicial exception. The trained machine learning agent is used to generally apply the abstract idea without placing any limits on how the trained machine learning agent functions.
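For context on the “transformation using learned parameter values” that the examiner characterizes as claimed only at the level of outcome: a typical concrete instance would be a linear map over feature values followed by a softmax over category scores. An illustrative sketch; the weight values and category names are invented, not drawn from the record:

```python
import math

# Hypothetical learned parameters: one weight vector per category.
weights = {
    "contract": [1.2, -0.3, 0.5],
    "invoice":  [-0.4, 0.9, 0.1],
    "report":   [0.0, 0.2, -0.6],
}

def predict(features):
    """Apply the learned linear transformation, then softmax over categories."""
    scores = {c: sum(w * x for w, x in zip(ws, features)) for c, ws in weights.items()}
    z = max(scores.values())                      # stabilise the exponentials
    exp = {c: math.exp(s - z) for c, s in scores.items()}
    total = sum(exp.values())
    return {c: e / total for c, e in exp.items()}

probs = predict([1.0, 0.5, 0.2])
print(max(probs, key=probs.get))  # contract
```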
Rather, these limitations only recite the outcome of “generate one or more predicted categories for the document” and do not include any details about how the generating step is accomplished. The independent claims further include generic computer components, including a system comprising a processor and a memory. However, the inclusion of generic computer components used to implement the abstract idea does not amount to significantly more than the abstract idea. MPEP 2106.05(b). Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. See MPEP 2106.05(h).

Claim 1 is therefore directed to the abstract idea. As discussed with respect to Step 2A Prong Two, the additional elements in the claim amount to no more than mere instructions to apply the exception using a generic computer component. The same analysis applies here at Step 2B, i.e., mere instructions to apply an exception using a generic computer component cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B. Thus, claim 1 is ineligible. Independent claims 13 and 18 are also directed to the abstract idea for the same reasons as claim 1 above.

Regarding claims 2, 14 and 19, these claims further recite the limitation “wherein the content of the document in the unstructured format is used in determining the probability value”, which indicates mere data gathering activity that the courts have found to be insignificant extra-solution activity. See MPEP 2106.05(g). As such, the claims include data related to mental processes that is insignificant extra-solution activity and well-understood, routine, and conventional. MPEP 2106.05(g).
As such, the claims are directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.

Regarding claims 3, 15 and 20, these claims further recite the limitation “wherein the features enhancing the metadata are evaluated from the metadata based on a continuum of a numerical value of a field in the metadata instead of the numerical value of the field itself”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g). As such, the claims are directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.

Regarding claims 4, 16 and 21, these claims further recite the limitation “wherein the features enhancing the metadata are evaluated from the metadata based on a proximity of each of the features enhancing the metadata to other features enhancing the metadata”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g). As such, the claims are directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.
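The continuum limitation of claims 3, 15 and 20 (using the range a numeric field falls into rather than the raw number itself) is, in implementation terms, ordinary value binning. A hypothetical sketch, with made-up bin edges and labels:

```python
# Map a numeric metadata field (e.g., document size in KB) onto a
# continuum of ranges instead of using the raw number as the feature.
# Bin boundaries and labels are invented for illustration.
bins = [(0, 100, "small"), (100, 1000, "medium"), (1000, float("inf"), "large")]

def continuum_feature(value):
    for low, high, label in bins:
        if low <= value < high:
            return label
    raise ValueError("value out of range")

print(continuum_feature(250))  # medium
```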
Regarding claims 5, 17 and 22, these claims further recite the limitation “wherein the features enhancing the metadata are evaluated from the metadata based on weights assigned to each of the features enhancing the metadata”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g). As such, the claims are directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and are directed to a judicial exception. Furthermore, the claims include no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claims are ineligible.

Regarding claim 8, the claim further recites the limitation “wherein the fields include at least one of a location of the document, a type of document and an author of the document”, which indicates mere data gathering activity that the courts have found to be insignificant extra-solution activity. See MPEP 2106.05(g). As such, the claim is directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claim includes no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claim is ineligible.

Regarding claim 9, the claim further recites the limitation “wherein the processor is further caused to assign a priority value to the features enhancing the metadata”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g).
As such, the claim is directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claim includes no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claim is ineligible.

Regarding claim 10, the claim further recites the limitation “wherein the processor is further caused to: compare one trained machine learning agent of a plurality of trained machine learning agents to one or more other trained machine learning agents of the plurality of trained machine learning agents to determine an overall mapping of the plurality of categories”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g). As such, the claim is directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claim includes no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claim is ineligible.

Regarding claim 11, the claim further recites the limitation “wherein the processor is further caused to: compare a new document to the plurality of trained machine learning agents; determine if the new document matches one or more of the categories represented by the plurality of trained machine learning agents; and if the new document does not match one or more of the categories represented by the plurality of trained machine learning agents, create a new trained machine learning agent for a new category represented by the new document”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g).
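The claim 10 and 11 limitations just quoted describe scoring a new document against a pool of per-category agents and spawning a new agent when none matches. A toy sketch of that control flow; the agents, scoring rules, and threshold below are all hypothetical stand-ins, not the application's method:

```python
# Hypothetical agent registry: each category maps to a scoring function
# standing in for a trained machine learning agent.
THRESHOLD = 0.5  # arbitrary match cutoff for this sketch

agents = {
    "contract": lambda doc: 0.9 if "agreement" in doc else 0.1,
    "invoice":  lambda doc: 0.8 if "amount due" in doc else 0.1,
}

def classify_or_create(doc, new_category):
    """Match doc against every agent; create a new agent if none matches."""
    scores = {cat: agent(doc) for cat, agent in agents.items()}
    best = max(scores, key=scores.get)
    if scores[best] >= THRESHOLD:
        return best
    # No existing category matches: register an agent for the new category.
    agents[new_category] = lambda d: 1.0 if doc[:20] in d else 0.0
    return new_category

print(classify_or_create("statement of amount due", "statement"))  # invoice
print(classify_or_create("weekly status notes", "status_note"))    # status_note
```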
As such, the claim is directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claim includes no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claim is ineligible.

Regarding claim 12, the claim further recites the limitation “wherein the processor is further caused to: compare one trained machine learning agent of the plurality of trained machine learning agents to a plurality of new documents; and determine which new documents of the plurality of new documents best match the category represented by the one trained machine learning agent”, which is insignificant extra-solution activity that is well-understood, routine, and conventional. MPEP 2106.05(g). As such, the claim is directed to mental processes that fall within the “Mental Processes” grouping of abstract ideas and is directed to a judicial exception. Furthermore, the claim includes no additional elements that would integrate the judicial exception into a practical application or would amount to significantly more than the abstract idea. Thus, the claim is ineligible.

Claim Rejections - 35 USC § 103

4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

5. Claims 1-4, 8, 13-16, and 18-21 are rejected under 35 U.S.C. 103 as being unpatentable over Yanamandra et al. (previously presented) (US 2020/0394396 A1), hereinafter Yanamandra, in view of BROYLES et al. (previously presented) (US 2021/0406716 A1), hereinafter BROYLES, in view of Srinivasulu et al. (previously presented) (US 2021/0049138 A1), hereinafter Srinivasulu, and further in view of Swingler et al. (US 11,972,489 B1), hereinafter Swingler.

As to claim 1, Yanamandra discloses a system, comprising: a processor; and a memory coupled with and readable by the processor and storing therein a set of instructions which, when executed by the processor (Para. 112), causes the processor to: receive a document including at least one of structured data, semi-structured data, and unstructured data (Para. 45, content classification system 200 receives a merged mortgage document 208, i.e., receives a document, that includes images of unstructured document pages (unstructured document page images 210, 214) and images of structured document pages (structured document page images 212, 216). Para. 36, “a classification system can process a merged document to classify the structured documents, unstructured and semi-structured documents that make up the merged document.”.), wherein the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format (Fig. 14, Para. 110, “Capture application 1412 separates the set of pages into a first set of classified documents (e.g., a set of classified structured documents) and a set of unclassified document pages (e.g., a set of unclassified unstructured or semi-structured page images), extracts metadata from the classified documents and stores the classified documents with metadata and unclassified document pages to content management system 1420.”. Para. 113, “each document and page managed by content server 1432 is associated with respective document or page metadata. The metadata may include an object identifier associated with each item managed by the content server 1432. In particular, in order to manage content in the content management system (e.g., as stored in data store 1434), content server 1432 may utilize one or more object identifiers, such as GUIDs to identify objects. Such object identifiers may be used throughout classification system 1400 to identify individual classified documents 1435, unclassified pages 1436 and separated documents 1438.”. Thus, the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format.).
Yanamandra does not explicitly disclose: wherein the content in the unstructured format includes one of natural language data, speech data, and video data; receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; analyze the metadata in the structured format to generate features enhancing the metadata in the structured format, wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document; produce the classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format; provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classify the document by associating the document with the predicted category and store a classification record in memory.

However, in the same field of endeavor, BROYLES discloses: analyze the metadata in the structured format to generate features enhancing the metadata in the structured format (Para. 43, “To generate structured metadata 320, the classification layer 208 may use the structured form representation 220 to locate the pieces of metadata included in the reference document 310.
The form components 224A, ..., 224N of the structured form representation 220 may be parsed to locate input fields (e.g., fields for inputting person information fields and wages) within the reference document 310. For example, the structured form representation 220 may be parsed to locate the fields within the W-2 form. The form components 224A, ..., 224N of the structured form representation 220 may already be selected based on relevance to a particular context (e.g., preparing a tax return). Therefore, the structured form representation 220 may be parsed to locate only the input fields containing metadata that is relevant to the particular context. The metadata input into the relevant fields in the reference document 310 are then extracted and assembled as structured metadata 320.”. Para. 65, “At step 608, the pieces of metadata are classified into metadata components using one or more machine learning models. For example, the piece of metadata input into a field may be classified as a "value" component, the description of the field may be classified as a "description" component. The machine learning models may also classify pieces of metadata into categories (i.e., personal information, financial data, medical data, and the like) based on the type of information included in the piece of metadata. At 610, the machine learning models may cluster the metadata components according to the category corresponding to each component to aggregate metadata components having the same category. For example, all metadata components classified as personal information (i.e., name, address, social security number, etc.) may be clustered together. Metadata components may also be clustered based on a similarity to a particular criterion and/or proximity to a particular position within the reference document. Based on the clustering, metadata components may be assembled into a structured metadata representation.
The structured metadata may be a schema format of the pieces of metadata.”. Thus, the metadata is analyzed in the structured format to generate features enhancing the metadata in the structured format.); produce classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format (Fig. 6, Para. 39, “The reference document type 316 may also be used as a feature of classification machine learning models to facilitate classifying aspects of structured metadata 320, the reference document 310 and/or the structured form representation 220.”. Para. 65, “At step 608, the pieces of metadata are classified into metadata components using one or more machine learning models. For example, the piece of metadata input into a field may be classified as a "value" component, the description of the field may be classified as a "description" component. The machine learning models may also classify pieces of metadata into categories (i.e., personal information, financial data, medical data, and the like) based on the type of information included in the piece of metadata. At 610, the machine learning models may cluster the metadata components according to the category corresponding to each component to aggregate metadata components having the same category. For example, all metadata components classified as personal information (i.e., name, address, social security number, etc.) may be clustered together. Metadata components may also be clustered based on a similarity to a particular criterion and/or proximity to a particular position within the reference document. Based on the clustering, metadata components may be assembled into a structured metadata representation. The structured metadata may be a schema format of the pieces of metadata.”.
Thus, classification data is produced for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format.).

Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yanamandra by analyzing the elements of the document, such as the metadata of Yanamandra, to generate features such that the received document can be accurately classified, as suggested by BROYLES (Para. 45; 54). The extracted pieces of metadata obtained from the documents of Yanamandra can be classified into categories based on the type of information included in the piece of metadata. One of ordinary skill in the art would have been motivated to make this modification in order to organize structured metadata according to the reference document type associated with a particular reference document so that structured metadata can be queried using the reference document type, as suggested by BROYLES (Para. 39).
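BROYLES's step 608/610 flow cited above (classify each piece of metadata into components, then cluster the components by category) can be roughed out as follows; the field names, and the keyword rules standing in for BROYLES's machine learning models, are invented for illustration:

```python
from collections import defaultdict

# Hypothetical extracted (description, value) field pairs from a form.
fields = [
    ("Employee name", "J. Smith"),
    ("Home address", "12 Oak St"),
    ("Wages", "52000"),
    ("Federal tax withheld", "6400"),
]

def categorize(description):
    """Toy keyword rules standing in for the classification models."""
    if any(k in description.lower() for k in ("name", "address")):
        return "personal_information"
    return "financial_data"

# Cluster metadata components by predicted category (the step 610 idea).
clusters = defaultdict(list)
for description, value in fields:
    clusters[categorize(description)].append({"description": description, "value": value})

print(sorted(clusters))  # ['financial_data', 'personal_information']
```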
The combination of Yanamandra and BROYLES does not explicitly disclose: wherein the content in the unstructured format includes one of natural language data, speech data, and video data; receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document; provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classify the document by associating the document with the predicted category and store a classification record in memory.

However, in the same field of endeavor, Srinivasulu discloses: receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents (Para. 59, “The generated metadata profiles can then be fed into trained prediction module 106. In some embodiments, each individual metadata profile can be fed into the prediction module to generate quality predictions (e.g., a data category).
Because embodiments use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Thus, receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents.), wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format (Para. 37, “training data 108 that is similar to an incoming batch of unstructured or unorganized data (e.g., input data 102) can be selected. In some embodiments, a previously trained neural network can be retrained (or updated) such that the training data that is similar to the batch of unstructured or unorganized data is used to retrain (or update) the model.”. Para. 56, “a trained prediction module 106 can be deployed to receive input data 102 (e.g., after being processed by processing module 104), and generate data quality predictions as output 110. For example, input data 102 can be a data dump from an unstructured data source, or can be any other suitable data. In some embodiments, input data 102 can represent data of a particular category, however this category may not be known. In some embodiments, input data 102 may represent data from a plurality of categories.”. Para. 73, “training data can be received from a structured data source (or multiple sources) such as a database, data warehouse, or any other suitable source.”. 
Thus, the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format.), and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories (Para. 75, “neural network can be trained using the generated metadata profiles and labeled data categories. In some embodiments, the neural network is trained off-line and is then received/loaded to generate data quality predictions. In some embodiments, the trained neural network is configured to predict a data category for incoming data. For example, training the neural network can include learning a set of parameters that are configured to discriminate among a plurality of data categories based on input features comprising metadata fields of the metadata profiles.”. Thus, during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories.); wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document (Para. 41, “metadata profiles can be generated for a set of training data from a known organizational structure (e.g., category). For example, fields for the metadata profiles can include data descriptive of an individual data value (e.g., a hash value of the data value, an individual data length, and the like) as well as data descriptive of other data values within the category (e.g., a mean length, a max length, and the like).”. Para. 67, “a probability criteria or confidence is generated for each category prediction.
For example, for a given row/element of input data (e.g., a given input metadata profile), the trained model can generate a category prediction for the model and a probability/confidence for the predicted category (e.g., 50%, 60%, 70%, 80%, 90%, and the like).”. Para. 69, “data quality predictions that meet a threshold (e.g., threshold implemented by middle server 604) are displayed and/or the data quality predictions can be sorted based on probability (e.g., high to low).”. Thus, the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document.); provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document (Para. 14, “Once trained, the machine learning model can be configured to receive input data (e.g., processed incoming data) and generate data quality improvement predictions. For example, the incoming data can be unstructured, lack consistency, or otherwise lack data properties related to quality. The incoming data can be processed, such as to extract features from the data and generate metadata profiles. In some embodiments, the processed incoming data (e.g., metadata profiles) can be fed to the trained machine learning model as input, and the model can generate predictions to improve the quality of the incoming data. For example, data categorization predictions can be generated that categorize this incoming data.”. Para. 59, “each individual metadata profile can be fed into the prediction module to generate quality predictions (e.g., a data category).
Because embodiments use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Thus, the trained machine learning agent applies a transformation using learned parameters values to generate one or more predicted categories for the document.); and automatically classify the document by associating the document with the predicted category and store a classification record in memory (Para. 59, “use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Para. 70, “structure can be added to the selected data (e.g., using predicted data categories), such as by storing the data in a structured manner (e.g., within a structured database or data schema). In an example, the selected data may be stored within an existing data schema that includes the predicted category for the data, thus allowing the selected data (which was previously unstructured) to be augmented and stored with structured data. In some embodiments, the data can be augmented automatically (e.g., without user confirmation), such as based on data quality predictions above a certain probability/confidence threshold (e.g., a category prediction above 90% ).”. Para. 28, “Database 217 can store data in an integrated collection of logically-related records or files. 
Database 217 can be an operational database, an analytical database, a data warehouse, a distributed database, an end-user database, an external database, a navigational database, an in-memory database”. Thus, automatically classify the document by associating the document with the predicted category and store a classification record in memory.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Srinivasulu into the combined method of Yanamandra and BROYLES by generating the probability value for the predicted category in order to classify the document of BROYLES as disclosed by Srinivasulu (Para. 67). The processed incoming data (e.g., metadata profiles) can be fed to the trained machine learning model as input, and the model can generate predictions to improve the quality of the incoming data. Data categorization predictions can be generated that categorize this incoming data (Srinivasulu, Para. 14). One of the ordinary skills in the art would have motivated to make this modification in order to improve the organization/structure of the data, and thus improve the data quality by using these category predictions as suggested by Srinivasulu (Para. 80). Combination of Yanamandra, BROYLES and Srinivasulu do not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data. However, in the same field of endeavor, Swingler discloses wherein the content in the unstructured format includes one of natural language data, speech data, and video data (Col. 4 lines 54-67, “The unstructured data could include different types of data files (e.g., a PDF file, Word document, a JPEG file, etc.). 
The unstructured data may include any documentation related to an insurance claim and may include vehicle damage photos, correspondence emails regarding the claim, accident scene videos, photos of bills, photos of repair estimates, insurance files, audio files, video files, and the like. As described herein, unstructured data may include information that is not stored in pre-defined data format or is not organized in a pre-defined manner. The unstructured data may be stored in any file format, including image file format, video file format, audio file format, text file format, program file format, or compressed file format.”. Col. 5 lines 10-21, “The categorizing component 108 may process unstructured data and classify the data by insurance categories. In various examples, the categorizing component 108 may receive the document ("data" or "unstructured data") for a claimed incident (e.g., incident 114), generate a unique identifier for the unstructured data, and store an unprocessed version of the unstructured data according to a document retention policy. The categorizing component 108 may apply one or more document processing and/or machine learning models to perform information extraction on the unstructured data and generate corresponding metadata ("structured data" or "augmented data").”. Thus, the content in the unstructured format includes one of natural language data, speech data, and video data.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Swingler into the combined method of Yanamandra, BROYLES and Srinivasulu by processing the unstructured data of Swingler in the environment of Yanamandra in order to perform classification on input data such as the unstructured data as disclosed by Swingler (Col. 11 lines 40-55). The system may train and use one or more models to perform classification on the unstructured data. 
A trained model can comprise a classifier that is tasked with classifying unknown data (e.g., an unknown image) with a class label from multiple insurance categories, such as labeling the image as an estimate, a bill, or a car (Col. 22 lines 1-9). One of ordinary skill in the art would have been motivated to make this modification in order to process unknown data with various types of content for determining the correct class label for the data by using the trained models with text recognition, image recognition, and other functionality as suggested by Swingler (Col. 22 lines 10-21). As to claim 13, Yanamandra discloses a method, comprising: receiving, by a processor (Para. 112), a document including at least one of structured data, semi-structured data, and unstructured data (Para. 45, content classification system 200 receives a merged mortgage document 208, i.e., receive a document, that includes images of unstructured document pages (unstructured document page images 210, 214) and images of structured document pages (structured document page images 212, 216). Para. 36, “a classification system can process a merged document to classify the structured documents, unstructured and semi-structured documents that make up the merged document.”.), wherein the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format (Fig. 14, Para.
110, “Capture application 1412 separates the set of pages into a first set of classified documents (e.g., a set of classified structured documents) and a set of unclassified document pages (e.g., a set of unclassified unstructured or semi-structured page images), extracts metadata from the classified documents and stores the classified documents with metadata and unclassified document pages to content management system 1420.”. Para. 113, “each document and page managed by content server 1432 is associated with respective document or page metadata. The metadata may include an object identifier associated with each item managed by the content server 1432. In particular, in order to manage content in the content management system (e.g., as stored in data store 1434), content server 1432 may utilize one or more object identifiers, such as GUIDs to identify objects. Such object identifiers may be used throughout classification system 1400 to identify individual classified documents 1435, unclassified pages 1436 and separated documents 1438.”. Thus, the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format.). 
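The three recited data types (structured, semi-structured, unstructured) can be illustrated with a minimal sketch; the `Document` class, its field names, and the sample values below are hypothetical, invented for illustration, and are not drawn from Yanamandra or the claims:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the claimed document model: metadata about the
# document in a structured format plus content in an unstructured format
# (e.g., natural language text). All names here are invented.
@dataclass
class Document:
    metadata: dict[str, str] = field(default_factory=dict)  # structured: field -> value
    content: str = ""  # unstructured: natural language, transcribed speech, etc.

    def kind(self) -> str:
        # Map the document's shape onto the claim's three data types.
        if self.metadata and self.content:
            return "semi-structured"
        return "structured" if self.metadata else "unstructured"

doc = Document(metadata={"loan_type": "mortgage"}, content="Borrower statement ...")
print(doc.kind())  # semi-structured
```

A document carrying only `metadata` would report `structured`, and one carrying only `content` would report `unstructured`, mirroring the three recited alternatives.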
Yanamandra does not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data; receiving, by the processor, a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; analyzing, by the processor, the metadata in the structured format to generate features enhancing the metadata in the structured format, wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document; producing, by the processor, the classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format; providing, by the processor, the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classifying, by the processor, the document by associating the document with the predicted category and storing a classification record in memory. However, in the same field of endeavor, BROYLES discloses analyzing, by the processor, the metadata in the structured format to generate features enhancing the metadata in the structured format (Para.
43, “To generate structured metadata 320, the classification layer 208 may use the structured form representation 220 to locate the pieces of metadata included in the reference document 310. The form components 224A, ..., 224N of the structured form representation 220 may be parsed to locate input fields (e.g., fields for inputting person information fields and wages) within the reference document 310. For example, the structured form representation 220 may be parsed to locate the fields within the W-2 form. The form components 224A, . . ., 224N of the structured form representation 220 may already be selected based on relevance to a particular context (e.g., preparing a text return). Therefore, the structured form representation 220 may be parsed to locate only the input fields containing metadata that is relevant to the particular context. The metadata input into the relevant fields in the reference document 310 are then extracted and assembled as structured metadata 320.”. Para. 65, “At step 608, the pieces of metadata are classified into metadata components using one or more machine learning models. For example, the piece of metadata input into a field may be classified as a "value" component, the description of the field may be classified as a "description" component. The machine learning models may also classify pieces of metadata into categories (i.e., personal information, financial data, medical data, and the like) based on the type of information included in the piece of metadata. At 610, the machine learning models may cluster the metadata components according to the category corresponding to each component to aggregate metadata components having the same category. For example, all metadata components classified as personal information (i.e., name, address, social security number, etc.) may be clustered together. 
Metadata components may also be clustered based on a similarity to a particular criterion and or proximity to a particular position within the reference document. Based on the clustering, metadata components may be assembled into a structured metadata representation. The structured metadata may be a schema format of the pieces of metadata.”. Thus, the metadata is being analyzed in the structured format to generate features enhancing the metadata in the structured format.); producing, by the processor, classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format (Fig. 6, Para. 39, “The reference document type 316 may also be used as a feature of classification machine learning models to facilitate classifying aspects of structured metadata 320, the reference document 310 and or the structured form representation 220.”. Para. 65, “At step 608, the pieces of metadata are classified into metadata components using one or more machine learning models. For example, the piece of metadata input into a field may be classified as a "value" component, the description of the field may be classified as a "description" component. The machine learning models may also classify pieces of metadata into categories (i.e., personal information, financial data, medical data, and the like) based on the type of information included in the piece of metadata. At 610, the machine learning models may cluster the metadata components according to the category corresponding to each component to aggregate metadata components having the same category. For example, all metadata components classified as personal information (i.e., name, address, social security number, etc.) may be clustered together. Metadata components may also be clustered based on a similarity to a particular criterion and or proximity to a particular position within the reference document. 
Based on the clustering, metadata components may be assembled into a structured metadata representation. The structured metadata may be a schema format of the pieces of metadata.”. Thus, classification data is produced for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yanamandra by analyzing the elements of the document, such as the metadata of Yanamandra, to generate features such that the received document can be accurately classified as suggested by BROYLES (Para. 45; 54). The extracted pieces of metadata obtained from the documents of Yanamandra can be classified into categories based on the type of information included in the piece of metadata. One of ordinary skill in the art would have been motivated to make this modification in order to organize structured metadata according to the reference document type associated with a particular reference document so that structured metadata can be queried using the reference document type as suggested by BROYLES (Para. 39).
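The component classification and clustering that BROYLES describes at Para. 65 (classifying pieces of metadata into categories such as personal information or financial data, then grouping components sharing a category) can be sketched roughly as follows; the keyword rules, function names, and sample data are invented for illustration only and do not come from BROYLES:

```python
from collections import defaultdict

# Rough, hypothetical sketch of the clustering step BROYLES describes:
# each piece of metadata is assigned a category, then components are
# grouped by category. The keyword sets below are invented.
PERSONAL = {"name", "address", "ssn"}
FINANCIAL = {"wages", "income", "tax"}

def categorize(field_name: str) -> str:
    # Assign a category to a metadata field (invented rules).
    if field_name in PERSONAL:
        return "personal information"
    if field_name in FINANCIAL:
        return "financial data"
    return "other"

def cluster(metadata: dict[str, str]) -> dict[str, list[tuple[str, str]]]:
    # Group (description, value) components by their assigned category.
    clusters = defaultdict(list)
    for desc, value in metadata.items():
        clusters[categorize(desc)].append((desc, value))
    return dict(clusters)

clusters = cluster({"name": "A. Smith", "wages": "50000", "ssn": "xxx-xx-1234"})
print(sorted(clusters))  # ['financial data', 'personal information']
```

Here `name` and `ssn` land in one cluster and `wages` in another, loosely mirroring the aggregation of metadata components having the same category before they are assembled into a structured metadata representation.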
The combination of Yanamandra and BROYLES does not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data; receiving, by the processor, a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document; providing, by the processor, the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classifying, by the processor, the document by associating the document with the predicted category and storing a classification record in memory. However, in the same field of endeavor, Srinivasulu discloses receiving, by the processor, a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents (Para. 59, “The generated metadata profiles can then be fed into trained prediction module 106. In some embodiments, each individual metadata profile can be fed into the prediction module to generate quality predictions (e.g., a data category).
Because embodiments use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Thus, receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents.), wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format (Para. 37, “training data 108 that is similar to an incoming batch of unstructured or unorganized data (e.g., input data 102) can be selected. In some embodiments, a previously trained neural network can be retrained (or updated) such that the training data that is similar to the batch of unstructured or unorganized data is used to retrain (or update) the model.”. Para. 56, “a trained prediction module 106 can be deployed to receive input data 102 (e.g., after being processed by processing module 104), and generate data quality predictions as output 110. For example, input data 102 can be a data dump from an unstructured data source, or can be any other suitable data. In some embodiments, input data 102 can represent data of a particular category, however this category may not be known. In some embodiments, input data 102 may represent data from a plurality of categories.”. Para. 73, “training data can be received from a structured data source (or multiple sources) such as a database, data warehouse, or any other suitable source.”. 
Thus, the trained machine learning agent has been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format.), and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories (Para. 75, “neural network can be trained using the generated metadata profiles and labeled data categories. In some embodiments, the neural network is trained off-line and is then received/loaded to generate data quality predictions. In some embodiments, the trained neural network is configured to predict a data category for incoming data. For example, training the neural network can include learning a set of parameters that are configured to discriminate among a plurality of data categories based on input features comprising metadata fields of the metadata profiles.”. Thus, during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories.); wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document (Para. 41, “metadata profiles can be generated for a set of training data from a known organizational structure (e.g., category). For example, fields for the metadata profiles can include data descriptive of an individual data value (e.g., a hash value of the data value, an individual data length, and the like) as well as data descriptive of other data values within the category (e.g., a mean length, a max length, and the like).”. Para. 67, “a probability criteria or confidence is generated for each category prediction. For example, for a given row/element of input data (e.g., a given input metadata profile), the trained model can generate a category prediction for the model and a probability/confidence for the predicted category (e.g., 50%, 60%, 70%, 80%, 90%, and the like).”. Para. 69, “data quality predictions that meet a threshold (e.g., threshold implemented by middle server 604) are displayed and/or the data quality predictions can be sorted based on probability (e.g., high to low).”. Thus, the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document.); providing, by the processor, the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document (Para. 14, “Once trained, the machine learning model can be configured to receive input data (e.g., processed incoming data) and generate data quality improvement predictions. For example, the incoming data can be unstructured, lack consistency, or otherwise lack data properties related to quality. The incoming data can be processed, such as to extract features from the data and generate metadata profiles. In some embodiments, the processed incoming data (e.g., metadata profiles) can be fed to the trained machine learning model as input, and the model can generate predictions to improve the quality of the incoming data. For example, data categorization predictions can be generated that categorize this incoming data.”. Para. 59, “each individual metadata profile can be fed into the prediction module to generate quality predictions (e.g., a data category). Because embodiments use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Thus, the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document.); and automatically classifying, by the processor, the document by associating the document with the predicted category and storing a classification record in memory (Para. 59, “use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Para. 70, “structure can be added to the selected data (e.g., using predicted data categories), such as by storing the data in a structured manner (e.g., within a structured database or data schema). In an example, the selected data may be stored within an existing data schema that includes the predicted category for the data, thus allowing the selected data (which was previously unstructured) to be augmented and stored with structured data. In some embodiments, the data can be augmented automatically (e.g., without user confirmation), such as based on data quality predictions above a certain probability/confidence threshold (e.g., a category prediction above 90%).”. Para. 28, “Database 217 can store data in an integrated collection of logically-related records or files. Database 217 can be an operational database, an analytical database, a data warehouse, a distributed database, an end-user database, an external database, a navigational database, an in-memory database”. Thus, automatically classify the document by associating the document with the predicted category and store a classification record in memory.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Srinivasulu into the combined method of Yanamandra and BROYLES by generating the probability value for the predicted category in order to classify the document of BROYLES as disclosed by Srinivasulu (Para. 67). The processed incoming data (e.g., metadata profiles) can be fed to the trained machine learning model as input, and the model can generate predictions to improve the quality of the incoming data. Data categorization predictions can be generated that categorize this incoming data (Srinivasulu, Para. 14). One of ordinary skill in the art would have been motivated to make this modification in order to improve the organization/structure of the data, and thus improve the data quality by using these category predictions as suggested by Srinivasulu (Para. 80). The combination of Yanamandra, BROYLES, and Srinivasulu does not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data. However, in the same field of endeavor, Swingler discloses wherein the content in the unstructured format includes one of natural language data, speech data, and video data (Col. 4 lines 54-67, “The unstructured data could include different types of data files (e.g., a PDF file, Word document, a JPEG file, etc.).
The unstructured data may include any documentation related to an insurance claim and may include vehicle damage photos, correspondence emails regarding the claim, accident scene videos, photos of bills, photos of repair estimates, insurance files, audio files, video files, and the like. As described herein, unstructured data may include information that is not stored in pre-defined data format or is not organized in a pre-defined manner. The unstructured data may be stored in any file format, including image file format, video file format, audio file format, text file format, program file format, or compressed file format.”. Col. 5 lines 10-21, “The categorizing component 108 may process unstructured data and classify the data by insurance categories. In various examples, the categorizing component 108 may receive the document ("data" or "unstructured data") for a claimed incident (e.g., incident 114), generate a unique identifier for the unstructured data, and store an unprocessed version of the unstructured data according to a document retention policy. The categorizing component 108 may apply one or more document processing and/or machine learning models to perform information extraction on the unstructured data and generate corresponding metadata ("structured data" or "augmented data").”. Thus, the content in the unstructured format includes one of natural language data, speech data, and video data.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Swingler into the combined method of Yanamandra, BROYLES and Srinivasulu by processing the unstructured data of Swingler in the environment of Yanamandra in order to perform classification on input data such as the unstructured data as disclosed by Swingler (Col. 11 lines 40-55). The system may train and use one or more models to perform classification on the unstructured data. 
A trained model can comprise a classifier that is tasked with classifying unknown data (e.g., an unknown image) with a class label from multiple insurance categories, such as labeling the image as an estimate, a bill, or a car (Col. 22 lines 1-9). One of ordinary skill in the art would have been motivated to make this modification in order to process unknown data with various types of content for determining the correct class label for the data by using the trained models with text recognition, image recognition, and other functionality as suggested by Swingler (Col. 22 lines 10-21). As to claim 18, Yanamandra discloses a non-transitory, computer-readable medium comprising a set of instructions stored therein which when executed by a processor (Para. 112), causes the processor to: receive a document including at least one of structured data, semi-structured data, and unstructured data (Para. 45, content classification system 200 receives a merged mortgage document 208, i.e., receive a document, that includes images of unstructured document pages (unstructured document page images 210, 214) and images of structured document pages (structured document page images 212, 216). Para. 36, “a classification system can process a merged document to classify the structured documents, unstructured and semi-structured documents that make up the merged document.”.), wherein the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format (Fig. 14, Para.
110, “Capture application 1412 separates the set of pages into a first set of classified documents (e.g., a set of classified structured documents) and a set of unclassified document pages (e.g., a set of unclassified unstructured or semi-structured page images), extracts metadata from the classified documents and stores the classified documents with metadata and unclassified document pages to content management system 1420.”. Para. 113, “each document and page managed by content server 1432 is associated with respective document or page metadata. The metadata may include an object identifier associated with each item managed by the content server 1432. In particular, in order to manage content in the content management system (e.g., as stored in data store 1434), content server 1432 may utilize one or more object identifiers, such as GUIDs to identify objects. Such object identifiers may be used throughout classification system 1400 to identify individual classified documents 1435, unclassified pages 1436 and separated documents 1438.”. Thus, the structured data includes metadata about the document in a structured format, the semi-structured data includes content of the document in an unstructured format and metadata about the document in the structured format and the unstructured data includes the content of the document in the unstructured format.). 
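The recited feature set (a field, a value, and the combination of the field and the value, each evaluated to determine a probability value) resembles per-feature likelihood estimates in a Bayesian classifier. A minimal sketch of how such probabilities might be estimated from labeled training rows; the training data, function name, and field/value examples are invented for illustration and are not taken from the application or the cited references:

```python
# Illustrative sketch only: estimating a probability for a field, a value,
# and the field/value combination within a labeled category, in the spirit
# of the recited features. The training rows below are invented.
training = [
    ("invoice", "doc_type", "bill"),
    ("invoice", "doc_type", "estimate"),
    ("invoice", "amount",   "bill"),
    ("claim",   "doc_type", "photo"),
]  # (category, field, value)

def feature_probs(category: str, fld: str, val: str) -> dict[str, float]:
    # Fraction of the category's rows matching the field, the value,
    # and the field/value combination, respectively.
    rows = [r for r in training if r[0] == category]
    n = len(rows)
    return {
        "field": sum(r[1] == fld for r in rows) / n,
        "value": sum(r[2] == val for r in rows) / n,
        "field+value": sum(r[1:] == (fld, val) for r in rows) / n,
    }

p = feature_probs("invoice", "doc_type", "bill")
print({k: round(v, 2) for k, v in p.items()})
# {'field': 0.67, 'value': 0.67, 'field+value': 0.33}
```

The combination probability is naturally lower than either marginal, which is why evaluating all three, as the claim recites, carries more information than any one of them alone.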
Yanamandra does not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data; receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; analyze the metadata in the structured format to generate features enhancing the metadata in the structured format, wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value and the combination of the field and the value for accurately defining the document; produce the classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format; provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classify the document by associating the document with the predicted category and store a classification record in memory. However, in the same field of endeavor, BROYLES discloses analyze the metadata in the structured format to generate features enhancing the metadata in the structured format (Para. 43, “To generate structured metadata 320, the classification layer 208 may use the structured form representation 220 to locate the pieces of metadata included in the reference document 310.
The form components 224A, ..., 224N of the structured form representation 220 may be parsed to locate input fields (e.g., fields for inputting person information fields and wages) within the reference document 310. For example, the structured form representation 220 may be parsed to locate the fields within the W-2 form. The form components 224A, . . ., 224N of the structured form representation 220 may already be selected based on relevance to a particular context (e.g., preparing a text return). Therefore, the structured form representation 220 may be parsed to locate only the input fields containing metadata that is relevant to the particular context. The metadata input into the relevant fields in the reference document 310 are then extracted and assembled as structured metadata 320.”. Para. 65, “At step 608, the pieces of metadata are classified into metadata components using one or more machine learning models. For example, the piece of metadata input into a field may be classified as a "value" component, the description of the field may be classified as a "description" component. The machine learning models may also classify pieces of metadata into categories (i.e., personal information, financial data, medical data, and the like) based on the type of information included in the piece of metadata. At 610, the machine learning models may cluster the metadata components according to the category corresponding to each component to aggregate metadata components having the same category. For example, all metadata components classified as personal information (i.e., name, address, social security number, etc.) may be clustered together. Metadata components may also be clustered based on a similarity to a particular criterion and or proximity to a particular position within the reference document. Based on the clustering, metadata components may be assembled into a structured metadata representation. 
The structured metadata may be a schema format of the pieces of metadata.”. Thus, the metadata is being analyzed in the structured format to generate features enhancing the metadata in the structured format.); produce the classification data for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format (Fig. 6, Para. 39, “The reference document type 316 may also be used as a feature of classification machine learning models to facilitate classifying aspects of structured metadata 320, the reference document 310 and or the structured form representation 220.”. Para. 65, “At step 608, the pieces of metadata are classified into metadata components using one or more machine learning models. For example, the piece of metadata input into a field may be classified as a "value" component, the description of the field may be classified as a "description" component. The machine learning models may also classify pieces of metadata into categories (i.e., personal information, financial data, medical data, and the like) based on the type of information included in the piece of metadata. At 610, the machine learning models may cluster the metadata components according to the category corresponding to each component to aggregate metadata components having the same category. For example, all metadata components classified as personal information (i.e., name, address, social security number, etc.) may be clustered together. Metadata components may also be clustered based on a similarity to a particular criterion and or proximity to a particular position within the reference document. Based on the clustering, metadata components may be assembled into a structured metadata representation. The structured metadata may be a schema format of the pieces of metadata.”. 
Thus, the classification data is produced for the document based on the features enhancing the metadata in the structured format and the content in the unstructured format.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the system of Yanamandra by analyzing elements of the document of Yanamandra, such as the metadata, to generate features such that the received document can be accurately classified, as suggested by BROYLES (Para. 45; 54). The extracted pieces of metadata obtained from the documents of Yanamandra can be classified into categories based on the type of information included in the piece of metadata. One of ordinary skill in the art would have been motivated to make this modification in order to organize structured metadata according to the reference document type associated with a particular reference document so that structured metadata can be queried using the reference document type, as suggested by BROYLES (Para. 39). 
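The claimed feature-generation step — evaluating the field, the value, and the field/value combination to a probability value for each — can be illustrated with a minimal, count-based (naive-Bayes-style) sketch. All names and the smoothing choice below are assumptions for illustration; this is not the applicant's or the cited references' actual code:

```python
# Hypothetical sketch: expand structured metadata into field, value, and
# field=value features, estimate a per-category probability for each feature
# from labeled counts (with add-one smoothing), and classify by the product
# of those probabilities.
from collections import Counter, defaultdict


def enhance(metadata):
    """Expand metadata into field, value, and field=value combination features."""
    feats = []
    for field, value in metadata.items():
        feats.append(f"field:{field}")
        feats.append(f"value:{value}")
        feats.append(f"pair:{field}={value}")  # combination of field and value
    return feats


def train(examples):
    """examples: list of (metadata_dict, category). Returns count tables."""
    cat_counts = Counter()
    feat_counts = defaultdict(Counter)  # feature -> category -> count
    for metadata, cat in examples:
        cat_counts[cat] += 1
        for f in enhance(metadata):
            feat_counts[f][cat] += 1
    return cat_counts, feat_counts


def probability(feat_counts, cat_counts, feature, category):
    """P(category | feature) estimated with add-one smoothing."""
    total = sum(feat_counts[feature].values())
    return (feat_counts[feature][category] + 1) / (total + len(cat_counts))


def classify(cat_counts, feat_counts, metadata):
    """Pick the category whose per-feature probabilities are jointly highest."""
    best, best_score = None, 0.0
    for cat in cat_counts:
        score = 1.0
        for f in enhance(metadata):
            score *= probability(feat_counts, cat_counts, f, cat)
        if score > best_score:
            best, best_score = cat, score
    return best
```

The sketch reflects only the shape of the limitation (a probability value for the field, the value, and their combination); the claims and references do not prescribe this particular estimator.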
The combination of Yanamandra and BROYLES does not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data; receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents, wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format, and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories; wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value and the combination of the field and the value for accurately defining the document; provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document; and automatically classify the document by associating the document with the predicted category and store a classification record in memory. However, in the same field of endeavor, Srinivasulu discloses receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents (Para. 59, “The generated metadata profiles can then be fed into trained prediction module 106. In some embodiments, each individual metadata profile can be fed into the prediction module to generate quality predictions (e.g., a data category). 
Because embodiments use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Thus, receive a trained machine learning agent that is configured to predict a category for classification data for a plurality of documents.), wherein the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format (Para. 37, “training data 108 that is similar to an incoming batch of unstructured or unorganized data (e.g., input data 102) can be selected. In some embodiments, a previously trained neural network can be retrained (or updated) such that the training data that is similar to the batch of unstructured or unorganized data is used to retrain (or update) the model.”. Para. 56, “a trained prediction module 106 can be deployed to receive input data 102 (e.g., after being processed by processing module 104), and generate data quality predictions as output 110. For example, input data 102 can be a data dump from an unstructured data source, or can be any other suitable data. In some embodiments, input data 102 can represent data of a particular category, however this category may not be known. In some embodiments, input data 102 may represent data from a plurality of categories.”. Para. 73, “training data can be received from a structured data source (or multiple sources) such as a database, data warehouse, or any other suitable source.”. 
Thus, the trained machine learning agent having been trained with training data including features enhancing the metadata in the structured format and the content in the unstructured format.), and wherein during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories (Para. 75, “neural network can be trained using the generated metadata profiles and labeled data categories. In some embodiments, the neural network is trained off-line and is then received/loaded to generate data quality predictions. In some embodiments, the trained neural network is configured to predict a data category for incoming data. For example, training the neural network can include learning a set of parameters that are configured to discriminate among a plurality of data categories based on input features comprising metadata fields of the metadata profiles.”. Thus, during training, the trained machine learning agent learns a set of parameters configured to discriminate among a plurality of categories.); wherein the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value and the combination of the field and the value for accurately defining the document (Para. 41, “metadata profiles can be generated for a set of training data from a known organizational structure (e.g., category). For example, fields for the metadata profiles can include data descriptive of an individual data value (e.g., a hash value of the data value, an individual data length, and the like) as well as data descriptive of other data values within the category (e.g., a mean length, a max length, and the like).”. Para. 67, “a probability criteria or confidence is generated for each category prediction. 
For example, for a given row/element of input data (e.g., a given input metadata profile), the trained model can generate a category prediction for the model and a probability/confidence for the predicted category (e.g., 50%, 60%, 70%, 80%, 90%, and the like).”. Para. 69, “data quality predictions that meet a threshold (e.g., threshold implemented by middle server 604) are displayed and/or the data quality predictions can be sorted based on probability (e.g., high to low).”. Thus, the features enhancing the metadata include a field, a value, and a combination of the field and the value evaluated to determine a probability value for each of the field, the value, and the combination of the field and the value for accurately defining the document.); provide the classification data to the trained machine learning agent, wherein the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document (Para. 14, “Once trained, the machine learning model can be configured to receive input data (e.g., processed incoming data) and generate data quality improvement predictions. For example, the incoming data can be unstructured, lack consistency, or otherwise lack data properties related to quality. The incoming data can be processed, such as to extract features from the data and generate metadata profiles. In some embodiments, the processed incoming data (e.g., metadata profiles) can be fed to the trained machine learning model as input, and the model can generate predictions to improve the quality of the incoming data. For example, data categorization predictions can be generated that categorize this incoming data.”. Para. 59, “each individual metadata profile can be fed into the prediction module to generate quality predictions (e.g., a data category). 
Because embodiments use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Thus, the trained machine learning agent applies a transformation using learned parameter values to generate one or more predicted categories for the document.); and automatically classify the document by associating the document with the predicted category and store a classification record in memory (Para. 59, “use training data including metadata fields based on both an individual data value (e.g., for an instance/row of training data) as well as groups of data values (e.g., groups of data values for a plurality of instances in a given data category), trained prediction module 106 can be configured to use the features from the metadata profiles extracted from input data 102 to generate data category predictions.”. Para. 70, “structure can be added to the selected data (e.g., using predicted data categories), such as by storing the data in a structured manner (e.g., within a structured database or data schema). In an example, the selected data may be stored within an existing data schema that includes the predicted category for the data, thus allowing the selected data (which was previously unstructured) to be augmented and stored with structured data. In some embodiments, the data can be augmented automatically (e.g., without user confirmation), such as based on data quality predictions above a certain probability/confidence threshold (e.g., a category prediction above 90%).”. Para. 28, “Database 217 can store data in an integrated collection of logically-related records or files. 
Database 217 can be an operational database, an analytical database, a data warehouse, a distributed database, an end-user database, an external database, a navigational database, an in-memory database”. Thus, automatically classify the document by associating the document with the predicted category and store a classification record in memory.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Srinivasulu into the combined method of Yanamandra and BROYLES by generating the probability value for the predicted category in order to classify the document of BROYLES as disclosed by Srinivasulu (Para. 67). The processed incoming data (e.g., metadata profiles) can be fed to the trained machine learning model as input, and the model can generate predictions to improve the quality of the incoming data. Data categorization predictions can be generated that categorize this incoming data (Srinivasulu, Para. 14). One of ordinary skill in the art would have been motivated to make this modification in order to improve the organization/structure of the data, and thus improve the data quality by using these category predictions as suggested by Srinivasulu (Para. 80). The combination of Yanamandra, BROYLES and Srinivasulu does not explicitly disclose wherein the content in the unstructured format includes one of natural language data, speech data, and video data. However, in the same field of endeavor, Swingler discloses wherein the content in the unstructured format includes one of natural language data, speech data, and video data (Col. 4 lines 54-67, “The unstructured data could include different types of data files (e.g., a PDF file, Word document, a JPEG file, etc.). 
The unstructured data may include any documentation related to an insurance claim and may include vehicle damage photos, correspondence emails regarding the claim, accident scene videos, photos of bills, photos of repair estimates, insurance files, audio files, video files, and the like. As described herein, unstructured data may include information that is not stored in pre-defined data format or is not organized in a pre-defined manner. The unstructured data may be stored in any file format, including image file format, video file format, audio file format, text file format, program file format, or compressed file format.”. Col. 5 lines 10-21, “The categorizing component 108 may process unstructured data and classify the data by insurance categories. In various examples, the categorizing component 108 may receive the document ("data" or "unstructured data") for a claimed incident (e.g., incident 114), generate a unique identifier for the unstructured data, and store an unprocessed version of the unstructured data according to a document retention policy. The categorizing component 108 may apply one or more document processing and/or machine learning models to perform information extraction on the unstructured data and generate corresponding metadata ("structured data" or "augmented data").”. Thus, the content in the unstructured format includes one of natural language data, speech data, and video data.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Swingler into the combined method of Yanamandra, BROYLES and Srinivasulu by processing the unstructured data of Swingler in the environment of Yanamandra in order to perform classification on input data such as the unstructured data as disclosed by Swingler (Col. 11 lines 40-55). The system may train and use one or more models to perform classification on the unstructured data. 
A trained model can comprise a classifier that is tasked with classifying unknown data (e.g., an unknown image) as one of a class label from multiple insurance categories such as labeling the image as an estimate, a bill, or a car (Col. 22 lines 1-9). One of ordinary skill in the art would have been motivated to make this modification in order to process unknown data with various types of content for determining the correct class label for the data by using the trained models with text recognition, image recognition, and other functionality as suggested by Swingler (Col. 22 lines 10-21). As to claims 2, 14 and 19, the claims are rejected for the same reasons as claims 1, 13, and 18 above. In addition, Srinivasulu discloses wherein the content of the document in the unstructured format is used in determining the probability value (Para. 70, Based on the confirmation, the selected data, i.e., classification data, can incorporate the augmented quality. For example, structure can be added to the selected data (e.g., using predicted data categories), such as by storing the data in a structured manner (e.g., within a structured database or data schema), i.e., a database structure. In an example, the selected data may be stored within an existing data schema that includes the predicted category for the data, thus allowing the selected data (which was previously unstructured) to be augmented and stored with structured data. In some embodiments, the data can be augmented automatically (e.g., without user confirmation), such as based on data quality predictions above a certain probability/confidence threshold (e.g., a category prediction above 90%). Thus, the classification data is stored in a database structure. Thus, the content of the document in the unstructured format is used in determining the probability value.). As to claims 3, 15 and 20, the claims are rejected for the same reasons as claims 1, 13, and 19 above. 
In addition, BROYLES discloses wherein the features enhancing the metadata are evaluated from the metadata based on a continuum of a numerical value of the field in the metadata instead of the numerical value of the field itself (Para. 29, “A form component 224A may include other form components. For example, as illustrated in FIG. 4B, a field 406 may include a field number 408 indicating the order in which the field 406 appears in the form (i.e., if the form includes 32 fields that are located above, to the left of, or otherwise before field A the field number 408 for field A is 33).”. Para. 33, “the features 232 used by the field form model to determine if the fields included in the document are relevant to a particular context (e.g., filing a tax form, filling out a medical record, and the like) may be generated by executing one or more machine learning algorithms on a fields training dataset including a plurality of documents having fields labeled as relevant to a particular context or not relevant to a particular context. The features 232 generated from the training dataset may include vectors and or other numerical representations of, for example, locations of fields within the document, type of document including the field, the types of data the fields, the content entered into the fields, statistical data of how frequently the field was left blank and or filled in, and other characteristics of fields that are labeled as relevant to one or more use case contexts in the training data.”. Thus, the features enhancing the metadata are evaluated from the metadata based on a continuum of a numerical value of a field in the metadata instead of the numerical value of the field itself.). As to claims 4, 16 and 21, the claims are rejected for the same reasons as claims 1, 13 and 18 above. 
In addition, BROYLES discloses wherein the features enhancing the metadata are evaluated from the metadata based on a proximity of each of the features enhancing the metadata to other features enhancing the metadata (Para. 27, “The locations of particular document elements may be used to calculate distances between the document elements 212A, ..., 212N. The distances between the document elements 212A, ..., 212N may be used, for example, to construct the structured form representation 220. The distances between the document elements 212A, ..., 212N may also be used as features of the form models 230A, ..., 230N and or other machine learning models to facilitate extracting information form a particular location in the form and or determining whether the extracted information is relevant to a particular context.”. Para. 36, “A distance separating the form components 224A, ..., 224N within the feature space may be calculated. The form components that have a distance that is less than a threshold distance of one or more other form components 224A, ..., 224N, a defined position in the features space, and or a particular form component that is known to be relevant may be determined to be relevant to a particular context and may included in the structured form representation.”. Thus, the features enhancing the metadata are evaluated from the metadata based on a proximity of each of the features enhancing the metadata to other features enhancing the metadata.). As to claim 8, the claim is rejected for the same reasons as claim 2 above. In addition, BROYLES discloses wherein the fields include at least one of a location of the document, a type of document and an author of the document (Para. 
33, “The features 232 generated from the training dataset may include vectors and or other numerical representations of, for example, locations of fields within the document, type of document including the field, the types of data the fields, the content entered into the fields, statistical data of how frequently the field was left blank and or filled in, and other characteristics of fields that are labeled as relevant to one or more use case contexts in the training data.”. Thus, the fields include at least one of a location of the document, a type of document and an author of the document.). 6. Claims 5, 9, 17 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Yanamandra, BROYLES, Srinivasulu and Swingler as applied above and further in view of DESAI et al. (previously presented) (US 2022/0179906 A1) hereinafter DESAI. As to claims 5, 17 and 22, the claims are rejected for the same reasons as claims 1, 13 and 18 above. The combination of Yanamandra, BROYLES, Srinivasulu and Swingler does not disclose wherein the features enhancing the metadata are evaluated from the metadata based on weights assigned to each of the features enhancing the metadata. However, in the same field of endeavor, DESAI discloses wherein the features enhancing the metadata are evaluated from the metadata based on weights assigned to each of the features enhancing the metadata (Para. 34, “the predetermined weights assigned to each feature of the set of features of the respective segment and validating the assigned labels by comparing the metadata for each document to the assigned labels of the respective document.”. Para. 35, “The server may break down each of the segments of the document into a set of features. The server may assign a part-of-speech tag to the one or more words corresponding to each respective segment of the document based on predetermined weights assigned to each feature of the set of features of the respective segment. 
The server may assign a dependency label to the one or more words corresponding to each respective segment based on the part-of speech tag assigned to the respective one or more words and the predetermined weights assigned to each feature of the set of features of the respective segment.”. Thus, the features enhancing the metadata are evaluated from the metadata based on weights assigned to each of the features enhancing the metadata.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of DESAI into the combined method of Yanamandra, BROYLES, Srinivasulu and Swingler by assigning the weight to each feature such as the features of BROYLES in order to classify the document corresponding to the domain using the trained learning model as disclosed by DESAI (Para. 35). The server may assign a Name Entity Relationship (NER) label from the set of predefined labels corresponding to the domain to the one or more words corresponding to each respective segment based on the part-of-speech tag and dependency tag assigned to the respective one or more words, and the predetermined weights assigned to each feature of the set of features of the respective segment (DESAI, Para. 35). One of ordinary skill in the art would have been motivated to make this modification in order to classify the document effectively and efficiently based on each of the NER labels assigned to the words in the document as suggested by DESAI (Para. 30; 35). As to claim 9, the claim is rejected for the same reasons as claim 1 above. In addition, DESAI discloses wherein the processor is further caused to assign a priority value to the features enhancing the metadata (Para. 
35, “the server may assign a Name Entity Relationship (NER) label from the set of predefined labels corresponding to the domain to the one or more words corresponding to each respective segment based on the part-of-speech tag and dependency tag assigned to the respective one or more words, and the predetermined weights assigned to each feature of the set of features of the respective segment.”. Para. 104, “Based on the validation of the assigned parts-of-speech and dependency tags and NER labels, the learning model may determine whether the accuracy of the learning model's assignment of the assigned parts-of-speech and dependency tags and NER labels meets a threshold. The threshold may be preprogrammed or may be provided in the request to train the learning model.”. Thus, the processor is further caused to assign a priority value to the features enhancing the metadata.). 7. Claims 10-12 are rejected under 35 U.S.C. 103 as being unpatentable over Yanamandra, BROYLES, Srinivasulu and Swingler as applied above and further in view of Park (previously presented) (US 9,477,756 B1) hereinafter Park. As to claim 10, the claim is rejected for the same reasons as claim 1 above. The combination of Yanamandra, BROYLES, Srinivasulu and Swingler does not disclose wherein the processor is further caused to: compare one trained machine learning agent of a plurality of trained machine learning agents to one or more other trained machine learning agents of the plurality of trained machine learning agents to determine an overall mapping of the plurality of categories. However, in the same field of endeavor, Park discloses wherein the processor is further caused to: compare one trained machine learning agent of a plurality of trained machine learning agents to one or more other trained machine learning agents of the plurality of trained machine learning agents to determine an overall mapping of the plurality of categories (Col. 
10 line 8-21, the document classification module 104 presents the text string representing the structure of the received document 106 to the classifier 108 for a first class or category of documents. As described herein, the classifier 108 may be trained from training documents 112 labeled as belonging to a number of different classes or categories, i.e., plurality of trained machine learning agents. The document classification module 104 may select a first of the known document classes and present the text string representing the structure of the document 106 to the classifier 108 for the selected class. According to embodiments, the classifier 108 utilizes the classifier algorithm to calculate a probability that the received document 106 belongs to the selected document class based on observances in the training documents 112 labeled as belonging to that class. Col. 10 line 55-66, “If the calculated probability of the received document 106 belonging to the selected document class does exceed the threshold value, then the routine 500 proceeds from operation 512 to operation 516, where the document classification module 104 classifies the received document 106 as belonging to the selected document class. Alternatively, the document classification module 104 may present the text string representing the structure of the received document 106 to the classifier 108 for each of the known document classes, and select the document class having the highest calculated probability as determined by the classification for the received document.”.). Therefore, it would have been obvious for one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching of Park into the combined method of Yanamandra, BROYLES, Srinivasulu and Swingler by analyzing documents in order to determine a category to which the document belongs based on the observed features in the document as disclosed by Park (Col. 3 line 40-46). 
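Park's quoted classification flow — present the document to the classifier for each known class in turn, accept the first class whose calculated probability clears a threshold (Park's example uses 80%), and otherwise select the class with the highest calculated probability — reduces to a short sketch. The function and parameter names here are illustrative assumptions, not Park's actual code:

```python
# Hypothetical sketch of the selection logic quoted from Park: iterate over
# the known document classes, accept the first class whose calculated
# probability exceeds the threshold, and otherwise fall back to the class
# with the highest calculated probability.
def classify_document(class_probs, threshold=0.80):
    """class_probs maps each known document class to a calculated probability."""
    for document_class, probability in class_probs.items():
        if probability >= threshold:
            return document_class  # classified with a high degree of certainty
    # No class cleared the threshold: select the highest-probability class.
    return max(class_probs, key=class_probs.get)
```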
The document classification module trains the classifier utilizing a number of training documents such as the document of Yanamandra that have already been labeled with a document class or category. One of ordinary skill in the art would have been motivated to make this modification in order to provide more accurate classification of documents by using N-grams comprising sequences of HTML tags extracted from the text string representing the document as suggested by Park (Col. 5 line 66-67; Col. 6 line 44-45). As to claim 11, the claim is rejected for the same reasons as claim 10 above. In addition, Park discloses wherein the processor is further caused to: compare a new document to the plurality of trained machine learning agents; determine if the new document matches one or more of the categories represented by the plurality of trained machine learning agents; and if the new document does not match one or more of the categories represented by the plurality of trained machine learning agents, create a new trained machine learning agent for a new category represented by the new document (Col. 10 line 39-54, The routine 500 proceeds from operation 510 to operation 512, where the document classification module 104 determines if the calculated probability of the received document 106 belonging to the selected document class exceeds some threshold value. The threshold value may be set high enough to ensure that the classification of the received document 106 is performed with a high degree of certainty, such as 80%. 
If the calculated probability of the received document 106, i.e., the new document, belonging to the selected document class does not exceed the threshold value, then the routine 500 proceeds from operation 512 to operation 514, where the document classification module 104 presents the text string representing the structure of the received document 106 to the classifier 108 for a next class or category of documents, i.e., a new agent, using the same methodology as described above in regard to operation 510. Col. 8 line 25-39, “The document 106 may be labeled by a combination of textual content analysis performed by the document classification module 104 or other module or process and/or manual analysis of the document's contents, as described above. In some embodiments, the document classification module 104 receives a number of training documents 112 labeled as to class or category from which to train the classifier 108. The routine 400 proceeds from operation 402 to operation 404, where the document classification module 104 parses the structural elements from the received structured document 106. For example, as described above in regard to FIG. 2, the document classification module 104 may parse the HTML tags from the document 106 comprising an HTML-based email message identified as a "marketing" document.”.).

As to claim 12, the claim is rejected for the same reasons as claim 10 above. In addition, Park discloses wherein the processor is further caused to: compare one trained machine learning agent of the plurality of trained machine learning agents to a plurality of new documents; and determine which new documents of the plurality of new documents best match the category represented by the one trained machine learning agent (Col.
10 line 55-66, “If the calculated probability of the received document 106 belonging to the selected document class does exceed the threshold value, then the routine 500 proceeds from operation 512 to operation 516, where the document classification module 104 classifies the received document 106 as belonging to the selected document class. Alternatively, the document classification module 104 may present the text string representing the structure of the received document 106 to the classifier 108 for each of the known document classes, and select the document class having the highest calculated probability as determined by the classification for the received document.”. Thus, the processor is further caused to: compare one agent of the plurality of agents to a plurality of new documents; and determine which new documents of the plurality of new documents best match the category represented by the one agent.).

Response to Arguments

8. Regarding the §101 rejections, applicant argues that the amended independent claims are directed to statutory subject matter and comply with the requirements set forth under 35 U.S.C. 101. However, the examiner respectfully responds that the additional elements provide nothing more than mere instructions to implement an abstract idea on a generic computer. The trained machine learning agent is used to generally apply the abstract idea without placing any limits on how the trained machine learning agent functions. Rather, these limitations only recite the outcome of “generate one or more predicted categories for the document” and do not include any details about how the generating step is accomplished. See MPEP 2106.05(f). The inclusion of generic computer components being used to implement the abstract idea does not amount to significantly more than the abstract idea. Thus, claims 1-5 and 8-22 are directed to the abstract idea.
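For illustration only, the threshold-then-highest-probability flow that Park's routine 500 describes (operations 510-516) might be sketched as below. The agent labels, probability functions, and 80% threshold are hypothetical stand-ins, not the claimed invention.

```python
def classify(doc, agents, threshold=0.80):
    """Present the document to the classifier for each known class in turn;
    accept the first class whose calculated probability exceeds the threshold.
    If none does, fall back to the class with the highest probability."""
    scores = {label: prob(doc) for label, prob in agents.items()}
    for label, p in scores.items():
        if p > threshold:
            return label, p                # high-certainty match: classify and stop
    best = max(scores, key=scores.get)     # otherwise select the highest-probability class
    return best, scores[best]

# Hypothetical trained agents mapping a class label to a probability function.
agents = {"marketing": lambda d: 0.62, "invoice": lambda d: 0.91}
label, p = classify("<html>...</html>", agents)   # -> ("invoice", 0.91)
```

Under claim 11 as mapped by the examiner, a document failing the threshold for every existing class would instead trigger creation of a new agent for a new category.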
Regarding dependent claims 2, 14 and 19, applicant argues that the cited references fail to disclose, teach or suggest “wherein the content of the document in the unstructured format is used in determining the probability value”. However, the prior art Srinivasulu discloses in [Para. 70], “the selected data may be stored within an existing data schema that includes the predicted category for the data, thus allowing the selected data (which was previously unstructured) to be augmented and stored with structured data. In some embodiments, the data can be augmented automatically (e.g., without user confirmation), such as based on data quality predictions above a certain probability/confidence threshold (e.g., a category prediction above 90%)”.

Applicant's arguments, see pages 8-11, with respect to the rejections of claims 1-5 and 8-22 under 35 USC §103 have been considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendments, as set forth in the respective rejections of claims 1-5 and 8-22 under 35 USC §103 above in view of the newly found reference.

Conclusion

9. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. RAHMAN et al. (US 2023/0394235 A1) teaches automatically validating documents having natural language text. Ravi (US 2023/0267373 A1) teaches universal training and deployment of machine learning models.

10. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MOHAMMAD SOLAIMAN BHUYAN whose telephone number is (571) 272-7843. The examiner can normally be reached Monday - Friday, 9:00am-5:00pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Charles Rones, can be reached at 571-272-4085. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MOHAMMAD S BHUYAN/
Examiner, Art Unit 2168

/CHARLES RONES/
Supervisory Patent Examiner, Art Unit 2168

Prosecution Timeline

Oct 13, 2023
Application Filed
Aug 27, 2024
Non-Final Rejection — §101, §103
Oct 30, 2024
Examiner Interview Summary
Oct 30, 2024
Applicant Interview (Telephonic)
Dec 02, 2024
Response Filed
Mar 08, 2025
Final Rejection — §101, §103
May 14, 2025
Response after Non-Final Action
Jun 24, 2025
Request for Continued Examination
Jun 26, 2025
Response after Non-Final Action
Jul 26, 2025
Non-Final Rejection — §101, §103
Sep 17, 2025
Applicant Interview (Telephonic)
Sep 17, 2025
Examiner Interview Summary
Oct 30, 2025
Response Filed
Feb 05, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12530370
INCREASING FAULT TOLERANCE IN A MULTI-NODE REPLICATION SYSTEM
2y 5m to grant Granted Jan 20, 2026
Patent 12517883
DATABASE INDEXING IN PERFORMANCE MEASUREMENT SYSTEMS
2y 5m to grant Granted Jan 06, 2026
Patent 12499136
METHOD FOR UPDATING A DATABASE OF A GEOLOCATION SERVER
2y 5m to grant Granted Dec 16, 2025
Patent 12493613
METHOD AND APPRATUS FOR PROVING A SHARED DATABASE CONNECTION IN A BATCH ENVIRONMENT
2y 5m to grant Granted Dec 09, 2025
Patent 12493589
Efficient Construction and Querying Progress of a Concurrent Migration
2y 5m to grant Granted Dec 09, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+22.8%)
2y 5m
Median Time to Grant
High
PTA Risk
Based on 164 resolved cases by this examiner. Grant probability derived from career allow rate.
