Prosecution Insights
Last updated: April 19, 2026
Application No. 18/482,754

DATA EXTRACTION AND ANALYSIS FROM UNSTRUCTURED DOCUMENTS

Status: Non-Final OA (§103)
Filed: Oct 06, 2023
Examiner: SMITH, SEAN THOMAS
Art Unit: 2659
Tech Center: 2600 — Communications
Assignee: Adobe Inc.
OA Round: 3 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 8m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 83% (5 granted / 6 resolved; +21.3% vs TC avg, above average)
Interview Lift: +33.3% across resolved cases with interview
Avg Prosecution: 2y 8m typical timeline; 37 applications currently pending
Total Applications: 43, across all art units
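The figures above are simple ratios over the examiner's resolved cases. A minimal sketch of the arithmetic, in which the Tech Center baseline is an assumption back-derived from the displayed +21.3% delta rather than a figure taken from the source data:

```python
# Sketch of the stat-card arithmetic. The TC-average baseline is an
# assumption back-derived from the displayed +21.3% delta, not a
# sourced figure.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def interview_lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point lift in allowance for interviewed cases."""
    return rate_with - rate_without

career = allow_rate(granted=5, resolved=6)           # 5/6 of resolved cases
tc_average = 62.0                                    # assumed TC baseline
print(f"Career allow rate: {career:.0f}%")           # Career allow rate: 83%
print(f"vs TC avg: {career - tc_average:+.1f} pp")   # vs TC avg: +21.3 pp
```

With only 6 resolved cases, one more grant or abandonment would move the headline rate by over 10 points, which is worth keeping in mind when reading the projections below.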

Statute-Specific Performance

§101: 27.9% (-12.1% vs TC avg)
§103: 50.7% (+10.7% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)

Tech Center average values are estimates. Based on career data from 6 resolved cases.
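The per-statute figures reduce to the same subtraction against a Tech Center baseline. A hedged sketch, in which the baselines are back-computed from the displayed deltas purely for illustration (they are not sourced independently):

```python
# Statute-specific rates vs. Tech Center averages. The TC baselines are
# back-computed from the displayed deltas (rate - delta) for illustration
# only; interestingly, they all come out to the same 40.0% here.

examiner_rates  = {"101": 27.9, "103": 50.7, "102": 12.9, "112": 8.6}
displayed_delta = {"101": -12.1, "103": +10.7, "102": -27.1, "112": -31.4}

tc_baseline = {s: examiner_rates[s] - displayed_delta[s] for s in examiner_rates}

for statute, rate in examiner_rates.items():
    delta = rate - tc_baseline[statute]
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f} pp vs TC avg)")
    # e.g. §101: 27.9% (-12.1 pp vs TC avg)
```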

Office Action

§103
DETAILED ACTION

This Office Action is responsive to amendments and arguments filed on January 13th, 2026. Claims 1-2, 10 and 15 are amended; claims 1-20 are pending and have been examined. Any objections/rejections not mentioned in this Office Action have been withdrawn by the Examiner.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Priority

This application continues from prior Application No. 18/185,547, filed March 17th, 2023, and as such, is granted the benefit of the earlier filing date.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on October 6 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Response to Amendments and Arguments

Regarding rejections made under 35 U.S.C. 103, Applicant argues, “The Office Action relies on Pouramini, pages 5-6, to teach creating region patterns. However, the Applicant respectfully submits that, unlike the cited prior art, the present invention teaches different rules that are based on different relationship types of the plurality of flexible anchor element [sic] with the common information element across a plurality of documents,” (page 10 of Remarks) and “Pouramini teaches that users can find anchors with different kinds of scanning of the HTML page, but does not include multiple relationship types between the flexible anchor and the common information element,” (page 11 of Remarks).

Applicant’s argument is persuasive, in that Pouramini may not be relied upon to teach the particular limitations of the amended claim; however, new grounds of rejection are made under 35 U.S.C. 103 in view of U.S. Patent 8,954,438 to Mao et al. Further details are provided below.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3-11 and 13-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication 2021/0264208 to Damodaran (hereinafter, "Damodaran") in view of U.S. Patent 8,954,438 to Mao et al. (hereinafter, “Mao”) and U.S. Patent Application Publication 2019/0384807 to Dernoncourt et al. (hereinafter, “Dernoncourt”).

Regarding claims 1, 10 and 15, Damodaran teaches a method, computer-readable medium and system comprising: obtaining a plurality of documents and a query, wherein the query indicates a common information element located in each of the plurality of documents (paragraph [0024], "The information server 202 is configured to receive instruction from the client device 204 for extract information from one or more documents. For example, the client device 204 can provide the information server 202 with a copy of a company's annual report, and the information server 202 can analyze the annual report document to extract fields (such as, chief executive officer (CEO), company name, total assets, etc.)."); extracting, using the machine learning model, the common information element from each of the plurality of documents based on the plurality of rules (paragraph [0024], "The information server 202 is configured to receive instruction from the client device 204 for extract information from one or more documents. For example, the client device 204 can provide the information server 202 with a copy of a company's annual report, and the information server 202 can analyze the annual report document to extract fields (such as, chief executive officer (CEO), company name, total assets, etc.)."); and generating content including values of the common information element from the plurality of documents based on the query (paragraph [0005], "The system includes a non-transitory computer-readable medium storing computer-executable instructions thereon such that when the instructions are executed, the system is configure to: (a) receive a first document; […]; (c) determine a first set of fields of interest for the first document, wherein the first set of fields of interest are determined via a type of the first document or via a first set of queries for probing the first document; […]; and (e) provide answers to the first set of fields of interest to the client device.").

Damodaran does not explicitly teach “identifying a plurality of flexible anchor elements corresponding to the common information element, wherein each of the plurality of flexible anchor elements indicates a different relationship to the common information element,” or “learning, using a machine learning model, a plurality of rules including a first rule and a second rule, wherein the first rule is based on a first relationship type between the information element and a first anchor element of the plurality of anchor elements, and the second rule is based on a second relationship type between the information element and a second anchor element of the plurality of anchor elements, and wherein the first relationship type comprises a position type and the second relationship type comprises a hierarchical structure type,” and thus, Mao is introduced.
Mao teaches identifying a plurality of flexible anchor elements corresponding to the common information element, wherein each of the plurality of flexible anchor elements has a different relationship to the common information element (column 2, line 23, "For example, an element list can be extracted from one or more documents based on heuristics or a pattern. One or more documents such as, for example, web pages having particular uniform resource locators (URLs) can include metadata relating to one or more entities. The metadata can include structured information relating to entities. An entity can be, for example, a television program, a music album, any other suitable media item or series thereof, or any combination thereof. An entity can have one or more associated elements such as, for example, an episode, a song, any other suitable element associated with an entity, or any combination thereof."); and learning, using a machine learning model, a plurality of rules including a first rule and a second rule, wherein the first rule is based on a first relationship type between the information element and a first anchor element of the plurality of anchor elements, and the second rule is based on a second relationship type between the information element and a second anchor element of the plurality of anchor elements, and wherein the first relationship type comprises a position type and the second relationship type comprises a hierarchical structure type (column 2, line 23, “For example, an element list can be extracted from one or more documents based on heuristics or a pattern. One or more documents such as, for example, web pages having particular uniform resource locators (URLs) can include metadata relating to one or more entities. The metadata can include structured information relating to entities,” column 7, line 4, “Step 602 includes server 110 extracting an element list using one or more heuristic rules. A heuristic rule may include, for example, a user-specified pattern for extracting an element list, an automatically-generated, e.g., by server 110, pattern for extracting an element list, or both. In some implementations, a rendered web page can include a list of episodes of a particular television program. For example, a user can specify a DOM tree path to element nodes for particular episodes of the television program based on where the element nodes would be expected to be arranged. In a further example, a heuristic rule can include searching in a "list" object, text having a large font size at the top of a web page, and/or other rules,” and column 4, line 65, “In some implementations, server 110 extracts an element list for entity XXX 322, an element list for entity YYY 324, an element list for any other suitable entity, or any combination thereof, using heuristic extraction 320, pattern-based extraction 350, or both.”).

Damodaran and Mao are considered analogous because they are each concerned with extracting textual data. Given that the substitution of one known element for another would yield predictable results, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have replaced the domain training of the CDQA models taught by Damodaran with the structural or positional rules as taught by Mao for the purpose of improving model extraction performance, as taught by Damodaran.

The combination of Damodaran and Mao does not teach “wherein the machine learning model is trained to locate anchor elements by computing a loss function based on a difference between a predicted anchor and a ground-truth anchor and updating parameters of the machine learning model based on the loss function,” and thus, Dernoncourt is introduced.
Dernoncourt teaches wherein the machine learning model is trained to locate anchor elements by computing a loss function based on a difference between a predicted anchor and a ground-truth anchor and updating parameters of the machine learning model based on the loss function (paragraph [0064], "Furthermore, the digital document annotation system 110 can utilize the neural network to analyze the electronic documents and predict digital annotations for the electronic document that correspond to key sentences of the electronic document. Additionally, the digital document annotation system 110 can determine a calculated loss for the neural network by utilizing a loss function to compare the predicted digital annotations from the neural network to the generated ground-truth digital annotations.").

Damodaran, Mao and Dernoncourt are considered analogous because they are each concerned with identifying textual data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Damodaran and Mao with the teachings of Dernoncourt for the purpose of improving data extraction accuracy. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 3, Damodaran teaches the method of claim 1, wherein: the content is generated in-situ within a software application that extracts a plurality of common information elements (paragraph [0024], "The information server 202 can include a document classification engine 212, a document pre-preprocessing engine 214, a field of interest extractor 216, and a post processor 218. An engine is a combination of hardware and software configured to perform specific functionality. The information server 202 is configured to receive instructions from the client device 204 for extracting information from one or more documents. For example, the client device 204 can provide the information server 202 with a copy of a company's annual report, and the information server 202 can analyze the annual report document to extract fields (such as, chief executive officer (CEO), company name, total assets, etc.). In some implementations, the information server 202 does not have to know the type of document being examined. Each of the document classification engine 212, the document processing engine 214, the field of interest extractor 216, and the post processing engine 218 identified in FIG. 2 is a combination of hardware and software configured to perform specific functionality as described in the following paragraphs."). In the teachings of Damodaran, the user interfaces with a device and the document processing and reporting are executed in a server environment. No teachings of Damodaran preclude the server processes from being conducted by a single software application, and from the perspective of the user, the request and response may be through the same interface.

Regarding claim 4, Damodaran teaches the method of claim 1, wherein: the content is generated by a generative machine learning model (paragraph [0020], "Embodiments of the present disclosure provide a zero-shot task transfer based domain agnostic information extraction framework. A plurality of closed-domain question answering (CDQA) models are leveraged to achieve domain agnostic document extraction. CDQA models are models trained under a specific domain.").
Regarding claim 5, Damodaran teaches the method of claim 1, further comprising: generating an additional document corresponding to the plurality of documents based on the generated content (paragraph [0031], "In some implementations, the post processing engine 218 provides the best answers in statement format to the client device 204 in an email, a chatbox, a voice recording, etc.").

Regarding claim 6, Damodaran teaches the method of claim 1, further comprising: generating an insight from the analysis, and presenting the insight to a user (paragraph [0030], "The post processing engine 218 cleanses the best answers for each field of interest for displaying on the client device 204. The post processing engine 218 can determine a best form for returning the best answers to the client device 204.").

Regarding claim 7, Damodaran teaches the method of claim 1, wherein: each of the flexible anchor elements comprises a plurality of relationships to a corresponding common information element of a plurality of common information elements (paragraph [0036], "Fields of interest can have various properties that can be configured as depicted in FIG. 4. For example, “CEO” field of interest configuration in an annual report can be linked to keywords of synonyms which include “chief executive officer”, “group chief executive”, and “CEO” as shown in item 404. The “CEO” field of interest configuration can also include the field name 402, the field origin 426 of where to look for values for the field, the field source 424 which can be a text source, and the field type 422 which indicates that the CEO field is a field of interest. The field name 402 merely names the field. The field origin 426 can take on values including paragraph, line, page, etc.").
Regarding claim 8, Damodaran teaches the method of claim 1, wherein: the query is a natural language query interpreted by a natural language processing (NLP) model (paragraph [0018], "Machine learning approaches to document extraction also present a challenge when trying to sidestep the previously described training problem by combining different machine learning models. For example, a classification model can be used to determine domain, then a named-entity-recognition (NER) model can be used for synonyms of fields of interest, and then a natural language query can be used to extract information pertaining to the fields of interest.").

Regarding claim 9, Damodaran teaches the method of claim 8, wherein: the NLP model is a Bidirectional Encoder Representations from Transformers (BERT) or Generative pre-trained transformers (GPT) model (paragraph [0029], "The field of interest extractor 216 utilizes at least two CDQA models when extracting fields from the document. The document elements (e.g., lines, paragraphs, etc.) determined by the document preprocessing engine 214 are candidates for probing with the at least two CDQA models. Examples of CDQA models include Bidirectional Encoder Representations from Transformers (BERT) trained on Stanford Question Answering Dataset (SQuAD), Simple Bi-Directional Attention Flow (BiDAF), ELMo-BIDAF, etc.").

Regarding claim 11, Damodaran teaches the non-transitory computer-readable medium of claim 10, wherein the executable instructions further cause the processing device to perform operations comprising: presenting a generated insight to a user based on the content (paragraph [0030], "The post processing engine 218 cleanses the best answers for each field of interest for displaying on the client device 204. The post processing engine 218 can determine a best form for returning the best answers to the client device 204," and paragraph [0031], "In some implementations, the post processing engine 218 provides the best answers in statement format to the client device 204 in an email, a chatbox, a voice recording, etc.").

Regarding claim 13, Damodaran teaches the non-transitory computer-readable medium of claim 10, wherein the executable instructions further cause the processing device to perform operations comprising: receiving a natural language query from a user, and identifying one or more common information elements in at least one of the plurality of documents based on the query (paragraph [0018], "Machine learning approaches to document extraction also present a challenge when trying to sidestep the previously described training problem by combining different machine learning models. For example, a classification model can be used to determine domain, then a named-entity-recognition (NER) model can be used for synonyms of fields of interest, and then a natural language query can be used to extract information pertaining to the fields of interest.").

Regarding claim 14, Damodaran teaches the non-transitory computer-readable medium of claim 13, wherein the executable instructions further cause the processing device to perform operations comprising: parse the natural language query using a Bidirectional Encoder Representations from Transformers (BERT) or Generative pre-trained transformers (GPT) model (paragraph [0029], "The field of interest extractor 216 utilizes at least two CDQA models when extracting fields from the document. The document elements (e.g., lines, paragraphs, etc.) determined by the document preprocessing engine 214 are candidates for probing with the at least two CDQA models. Examples of CDQA models include Bidirectional Encoder Representations from Transformers (BERT) trained on Stanford Question Answering Dataset (SQuAD), Simple Bi-Directional Attention Flow (BiDAF), ELMo-BIDAF, etc. The field of interest extractor 216 asks questions on the document elements of the preprocessed document using the different CDQA models in order to determine which of the available CDQA models provides a best response for a given question.").

Regarding claim 16, Damodaran teaches the system of claim 15, further comprising: clustering a plurality of documents to obtain a document cluster, wherein the plurality of common information elements are obtained from the document cluster (paragraph [0032], "In some implementations, the post processing engine 218 can link different documents together that may be related. For example, in a contracts example, a master service agreement, a statement of work, and an addendum can be preprocessed and elements indexed by the preprocessing engine 214. The field of interest extractor 216 can extract fields like a “First Party” field, a “Second Party” field, and a “Master Service Agreement effective date” field. The post processing engine 218 can link the master service agreement, the statement of work, and the addendum in a knowledge graph if the “First Party” field, the “Second Party” field, and the “Master Service Agreement effective date” field match in the three documents.").

Regarding claim 17, Damodaran teaches the system of claim 16, wherein: the plurality of documents includes unstructured documents (paragraph [0002], "The present disclosure relates to information extraction from documents and more specifically to systems and methods for extracting information of interest from unfamiliar documents using zero-shot learning," and paragraph [0003], "Information extraction involves automatically extracting structured information from either structured or unstructured documents. In some cases, the unstructured documents are human language texts where natural language processing (NLP) is applied to extract the structured information.").

Regarding claim 18, Damodaran teaches the system of claim 17, further comprising: receiving the document and the query from a user (paragraph [0040], "At step 306, the information server 202 receives a first document. The first document can be received from the client device 204 or from the database 206. The first document is a document to be analyzed by the information server 202 to find values for fields of interest," and paragraph [0018], "Machine learning approaches to document extraction also present a challenge when trying to sidestep the previously described training problem by combining different machine learning models. For example, a classification model can be used to determine domain, then a named-entity-recognition (NER) model can be used for synonyms of fields of interest, and then a natural language query can be used to extract information pertaining to the fields of interest."); analyzing the query using a natural language processor (paragraph [0003], "Information extraction involves automatically extracting structured information from either structured or unstructured documents. In some cases, the unstructured documents are human language texts where natural language processing (NLP) is applied to extract the structured information."); and generating an insight in response to the query based on the analysis (paragraph [0030], "The post processing engine 218 cleanses the best answers for each field of interest for displaying on the client device 204. The post processing engine 218 can determine a best form for returning the best answers to the client device 204.").
Regarding claim 19, Damodaran teaches the system of claim 18, further comprising: an analysis model trained to automatically perform the analysis and generate the insight (paragraph [0024], "The information server 202 is configured to receive instructions from the client device 204 for extracting information from one or more documents. For example, the client device 204 can provide the information server 202 with a copy of a company's annual report, and the information server 202 can analyze the annual report document to extract fields (such as, chief executive officer (CEO), company name, total assets, etc.). In some implementations, the information server 202 does not have to know the type of document being examined.").

Regarding claim 20, Damodaran teaches the system of claim 19, wherein: the at least one common information element of a plurality of common information elements is identified using a machine learning model trained to identify the at least one common information element (paragraph [0020], "Embodiments of the present disclosure provide a zero-shot task transfer based domain agnostic information extraction framework. A plurality of closed-domain question answering (CDQA) models are leveraged to achieve domain agnostic document extraction. CDQA models are models trained under a specific domain," and paragraph [0026], "The field of interest extractor 216 can extract fields within the document using different CDQA models and using different document extraction configurations. The document extraction configurations can include extraction configurations for annual reports, invoices, statements of work, master service agreements, etc.").

Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Damodaran, Mao and Dernoncourt as applied to claims 1 and 10 above, further in view of U.S. Patent Application Publication 2003/0212664 to Breining et al. (hereinafter, "Breining").
Regarding claim 2, the combination of Damodaran, Mao and Dernoncourt does not explicitly teach “The method of claim 1, wherein the content comprises a table containing a plurality of data elements,” and thus, Breining is introduced. Breining teaches the content comprises a table containing a plurality of data elements (paragraph [0043], "In the manner described above, the XML data is parsed into a single flat table and queried using conventional techniques to produce the output 60."). Damodaran, Mao, Dernoncourt and Breining are considered analogous because they are each concerned with identifying and extracting data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Damodaran, Mao and Dernoncourt with the teachings of Breining for the purpose of improving data readability. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Regarding claim 12, the combination of Damodaran, Mao and Dernoncourt does not explicitly teach “The non-transitory computer-readable medium of claim 10, wherein the executable instructions further cause the processing device to perform operations comprising: inputting the one or more extracted common information elements into a flat table having a denormalized schema,” however, Breining teaches inputting the one or more extracted common information elements into a flat table having a denormalized schema (paragraph [0043], "In the manner described above, the XML data is parsed into a single flat table and queried using conventional techniques to produce the output 60. The method described next avoids having to flatten the XML data into a single table and thereby repeat information in that table. Instead, the method operates on the XML information "on-the-fly," as a wrapper extracts it from the XML document."). In paragraph [0058], Applicant indicates that, while duplication is permitted in denormalized tables, the claimed invention may avoid duplicating data entries. The teachings of Breining also indicate the creation of a flat representation of data, where duplication is permitted and therefore the schema denormalized, but with efforts to avoid data duplication. Damodaran, Mao, Dernoncourt and Breining are considered analogous because they are each concerned with identifying and extracting data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Damodaran, Mao and Dernoncourt with the teachings of Breining for the purpose of improving data readability. Given that all the claimed elements were known in the prior art, one skilled in the art could have combined the elements by known methods with no change in their respective functions, and the combination would have yielded nothing more than predictable results.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

U.S. Patent 7,428,699 to Kane et al.
U.S. Patent 7,644,361 to Wu et al.
U.S. Patent 7,840,564 to Holzgrafe et al.
U.S. Patent 8,285,074 to Saund et al.
U.S. Patent 8,832,655 to Grechanik.
U.S. Patent 10,628,525 to Fink et al.
U.S. Patent 11,200,073 to Voicu.
U.S. Patent Application Publication 2012/0124086 to Song et al.
U.S. Patent Application Publication 2015/0106156 to Chang et al.
U.S. Patent Application Publication 2018/0025436 to Déjean.
U.S. Patent Application Publication 2018/010725 to Schmidt.
U.S. Patent Application Publication 2020/0104350 to Allen et al.
U.S. Patent Application Publication 2020/0160050 to Bhotika et al.
U.S. Patent Application Publication 2021/0248153 to Sirangimoorthy et al.
U.S. Patent Application Publication 2022/0043858 to Rinehart et al.
U.S. Patent Application Publication 2022/0188509 to Zeng et al.
U.S. Patent Application Publication 2022/0207268 to Gligan et al.
U.S. Patent Application Publication 2023/0169813 to Muthu et al.
U.S. Patent Application Publication 2024/0249543 to Jayaram et al.
U.S. Patent Application Publication 2024/0303412 to Evans et al.
“One-shot text field labeling using attention and belief propagation for structure information extraction” by Cheng et al.
“PDF-to-Text Reanalysis for Linguistic Data Mining” by Goodman et al.
“Facilitating conversational interaction in natural language interfaces for visualization” by Mitra et al.
“NL4DV: A Toolkit for Generating Analytic Specifications for Data Visualization from Natural Language Queries” by Narechania et al.
“Landmarks and Regions: A Robust Approach to Data Extraction” by Parthasarathy et al.
“Form2Seq: A Framework for Higher-Order Form Structure Extraction” by Aggarwal et al.
“DocStruct: A Multimodal Method to Extract Hierarchy Structure in document for General Form Understanding” by Wang et al.
“Web Data Extraction Using Textual Anchors” by Pouramini et al.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SEAN T SMITH whose telephone number is (571)272-6643. The examiner can normally be reached Monday - Friday 8:00am - 5:00pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, PIERRE-LOUIS DESIR can be reached at (571) 272-7799. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SEAN THOMAS SMITH/
Examiner, Art Unit 2659

/PIERRE LOUIS DESIR/
Supervisory Patent Examiner, Art Unit 2659

Prosecution Timeline

Oct 06, 2023: Application Filed
Jul 08, 2025: Non-Final Rejection — §103
Sep 15, 2025: Interview Requested
Sep 25, 2025: Examiner Interview Summary
Sep 25, 2025: Applicant Interview (Telephonic)
Sep 30, 2025: Response Filed
Nov 05, 2025: Final Rejection — §103
Jan 07, 2026: Interview Requested
Jan 13, 2026: Applicant Interview (Telephonic)
Jan 13, 2026: Request for Continued Examination
Jan 13, 2026: Examiner Interview Summary
Jan 26, 2026: Response after Non-Final Action
Mar 16, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602540: LEVERAGING A LARGE LANGUAGE MODEL ENCODER TO EVALUATE PREDICTIVE MODELS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12530534: SYSTEM AND METHOD FOR GENERATING STRUCTURED SEMANTIC ANNOTATIONS FROM UNSTRUCTURED DOCUMENT
Granted Jan 20, 2026 (2y 5m to grant)

Study what changed to get past this examiner. Based on the 2 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 83%
With Interview: 99% (+33.3%)
Median Time to Grant: 2y 8m
PTA Risk: High

Based on 6 resolved cases by this examiner. Grant probability derived from career allow rate.
