Prosecution Insights
Last updated: April 19, 2026
Application No. 18/807,347

GENERATIVE ARTIFICIAL INTELLIGENCE SYSTEM AND METHOD FOR DIGITAL COMMUNICATIONS

Non-Final OA — §102, §Other
Filed
Aug 16, 2024
Examiner
WONG, LINDA
Art Unit
2655
Tech Center
2600 — Communications
Assignee
Open Text Holdings Inc.
OA Round
1 (Non-Final)
Grant Probability: 85% — Favorable
Expected OA Rounds: 1-2
Time to Grant: 3y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 85% — above average (+22.9% vs TC avg)
602 granted / 709 resolved
Interview Lift: +15.5% — strong
Based on resolved cases with interview
Typical timeline: 3y 0m average prosecution; 17 currently pending
Career history: 726 total applications across all art units
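The headline examiner figures above can be cross-checked with simple arithmetic. This is an illustrative reconstruction: the inputs (602 granted, 709 resolved, 726 total applications) come from this page, while the rounding convention is an assumption.

```python
# Reconstruct the headline examiner statistics from the page's raw counts.
granted = 602
resolved = 709
total_applications = 726

allow_rate = granted / resolved                # career allow rate
pending = total_applications - resolved        # applications not yet resolved

print(f"Career allow rate: {allow_rate:.1%}")  # ≈ 84.9%, displayed as 85%
print(f"Currently pending: {pending}")         # 17
```

The 602/709 ratio lands at about 84.9%, which matches the displayed 85%, and 726 − 709 = 17 matches the pending count.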

Statute-Specific Performance

§101: 7.2%  (-32.8% vs TC avg)
§103: 44.5% (+4.5% vs TC avg)
§102: 22.3% (-17.7% vs TC avg)
§112: 16.5% (-23.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 709 resolved cases
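The "vs TC avg" deltas in the statute table can be sanity-checked: subtracting each delta from the examiner's rate recovers the implied Tech Center baseline. The interpretation of that baseline (the chart's average estimate) is an assumption; the rates and deltas are taken from the table above.

```python
# Recover the implied Tech Center average from each (rate, delta) pair.
rates = {
    "101": (7.2, -32.8),
    "103": (44.5, 4.5),
    "102": (22.3, -17.7),
    "112": (16.5, -23.5),
}
for statute, (rate, delta) in rates.items():
    baseline = round(rate - delta, 1)  # examiner rate minus delta vs TC avg
    print(f"§{statute}: implied TC average ≈ {baseline}%")
```

Notably, every statute's pair implies the same 40.0% baseline, so the four deltas are internally consistent.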

Office Action

Rejection bases: §102, §Other
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Drawings

The drawings were received on 10/30/2024. These drawings are accepted.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Tupakula et al. (US Publication No. 2022/0067077).

Claim 1: Tupakula et al. discloses accessing a document design for a multi-channel document (Fig. 2 shows document design), the document design comprising an object (Fig. 2, label 237 as template/scheme) and a semantically named variable (label 228 as questions including semantically named variable); populating the object of the document design with artificial intelligence generated (AI-generated) content (Fig. 2, label 216 as generated content by the machine learning model, label 220. Label 224 outputs extracted content from the data generated from labels 220, 216. Such extracted content is used to populate the schema/template.
(paragraph 33)), populating the object further comprising: receiving, based on a user interaction with a user interface, an indication to generate content for the object of the document design (Fig. 2, labels 204a-204d as content for the object of the document design received from sources 201a-201d. Paragraph 30 discloses 201a-d as different websites or sources, where such sites are user-interaction-based sites. For example, 201c are publicly available websites, 201b are private websites for a minimal subscription fee, etc. Such data is for populating the template/scheme of label 236.); determining, from the document design, the semantically named variable (Fig. 2, label 228 are questions such as "who won?" or "who are participants?" that include the semantically named variable); generating a request to a generative AI model (Fig. 2, labels 228, 212), the request comprising: a context, the context comprising the semantically named variable (Fig. 2, label 228 as questions or context comprising the semantically named variable); and a prompt to cause the generative AI model to generate text (Fig. 2, label 212 as a prompt or input data to cause the machine learning model to extract data relevant to label 228); inputting the request to the generative AI model (Fig. 2, labels 228, 212 are inputted into labels 220, 216); and receiving a response to the request from the generative AI model (Fig. 2, output from label 236), the response comprising AI-generated text that includes the semantically named variable (Fig. 2, label 224 is extracted content from the data generated from 220, 216 relevant to the questions or semantically named variable. Depending on the content from labels 220, 216, 224, such content can include the semantically named variable); and storing the AI-generated text to the object (Fig. 2, label 242; paragraph 33 discloses the extracted content (AI-generated text) populated into a template/scheme (object) may then be stored to the curated data repository 242); and packaging the object, including the AI-generated text, as part of a document (Fig. 2, label 268 as the user interface. An example is shown in Fig. 1, label 116. Fig. 6, label 632 shows a user interface, and paragraph 48 discloses 632 as a markup-language-based document).

Claim 2: Tupakula et al. discloses displaying a set of text in the user interface (Fig. 6, label 632; set of text, label 648); receiving an input via the user interface (Fig. 6, label 610 as the user interface receiving an input), the input indicating a selected text selected from the set of text displayed in the user interface (Fig. 6, label 616 as text selected, such as user-based search terms); and including the selected text in the prompt (Fig. 2, label 228 is a dataset of questions. Fig. 6, label 616 as the selected text or user question, wherein such text is used to generate curated data from repository 624. Such repository is also shown in Fig. 2, label 242. Fig. 2, label 228 is shown as part of the prompt or request to the learning model 220, 216, wherein the user-based question can be part of the question dataset and included in the prompt or request to 220, 216).

Claim 3: Tupakula et al. discloses receiving, based on user interaction with the user interface, an indication of an operation to perform with respect to the selected text (Fig. 6 shows user interaction performed with selected text, 616. Fig. 2 shows the operation of generating curated data used to output data to the user interface; see, e.g., Fig. 6, label 632); and automatically generating the prompt based on the operation to be performed (Figs. 2 and 6 show generation of the prompt based on the query or question to be answered (the operation to be performed)).

Claim 4: Tupakula et al. discloses wherein the operation is to reword the selected text (Fig. 4, label 413 as the question dataset, wherein the question "who was the winner?" is reworded or replaced with <outcome(?)>).

Claim 5: Tupakula et al. discloses wherein the set of text is stored as a first content value of the object (Fig. 4, label 412; Fig. 2, label 228, wherein outcome(?) is a first content value of the object (template) shown at label 428. Paragraph 45 discloses 410 (a filled-in version of 413) is stored or otherwise associated with schema 428.) and wherein storing the AI-generated text to the object (Fig. 4, label 408 outputs 410. Paragraph 45 discloses storing 410.) comprises storing a variation to the object (Fig. 4, label 410 as a variation of the template (object) 428, where such variation is data for the scheme. Paragraph 45 discloses storing 410.), the variation comprising the AI-generated text (Fig. 4, label 410 with winner = lumberjack as the AI-generated text).

Claim 6: Tupakula et al. discloses identifying the semantically named variable in the AI-generated text (Fig. 4, label 410 shows the semantically named variable as winner, where 410 is the AI-generated text); accessing a sample value for the semantically named variable (Fig. 4, label winner as a sample value, ID1 from 432); substituting, in the AI-generated text, the semantically named variable with the sample value for the semantically named variable (Fig. 4, label 428 includes winners with ID1, indicating replacing or substituting the AI-generated text of the semantically named variable, winner = lumberjack, with the sample value); and displaying a preview in the user interface using the AI-generated text, the preview having the semantically named variable substituted with the sample value (Fig. 4, label 428 as the schema. Fig. 2, label 268 as the user interface generated based on stored curated data such as the scheme shown in Fig. 4, label 428. Fig. 2, label 256 as the display displaying 268. Depending on the schema and curated data, the schema of Fig. 4 can be displayed to the user.).
Claim 7: Tupakula et al. discloses wherein the object is a text object (Fig. 2, label template/schema as a text object).

Claim 8: Tupakula et al. discloses wherein the object is a rule (Fig. 2, label template/scheme as a rule or guideline to be populated).

Claim 9: Tupakula et al. discloses wherein at least a portion of the prompt is received from a user via the user interface (Fig. 6, label 616 as the user-input question; Fig. 2, label 228 as the dataset of questions such as 616).

Claim 10: Tupakula et al. discloses maintaining a data store storing a template and a semantically named variable associated with the template (Fig. 2, label 242 includes the template/scheme populated with extracted content, wherein the content can include a semantically named variable), the template defining a layout for a plurality of objects (Fig. 2, label template/scheme; Fig. 4, label 428 as an example); receiving, based on a user interaction with a user interface, an indication to generate content for a selected object design (Fig. 2, labels 204a-204d as content for the object of the document design received from sources 201a-201d. Paragraph 30 discloses 201a-d as different websites or sources, where such sites are user-interaction-based sites. For example, 201c are publicly available websites, 201b are private websites for a minimal subscription fee, etc. Such data is for populating the template/scheme of label 236.), the selected object included in the plurality of objects (Fig. 2, label template/scheme includes a plurality of objects to be populated by extracted content); generating a request to a generative AI model (Fig. 2, labels 228, 212), the request comprising: a context, the context comprising the semantically named variable (Fig. 2, label 228 as questions or context comprising the semantically named variable); and a prompt to cause the generative AI model to generate text (Fig. 2, label 212 as a prompt or input data to cause the machine learning model to extract data relevant to label 228); inputting the request to the generative AI model (Fig. 2, labels 228, 212 are inputted into labels 220, 216); receiving a response to the request from the generative AI model (Fig. 2, output from label 236), the response comprising AI-generated text that includes the semantically named variable (Fig. 2, label 224 is extracted content from the data generated from 220, 216 relevant to the questions or semantically named variable. Depending on the content from labels 220, 216, 224, such content can include the semantically named variable); and storing the AI-generated text to the selected object (Fig. 2, label 242; paragraph 33 discloses the extracted content (AI-generated text) populated into a template/scheme (object) may then be stored to the curated data repository 242); and generating a document that includes the template (Fig. 2, label 268 as the user interface. An example is shown in Fig. 1, label 116. Fig. 6, label 632 shows a user interface, and paragraph 48 discloses 632 as a markup-language-based document).

Claims 11-18 recite similar limitations as claims 2-9, respectively, and are rejected on the same grounds.
Claim 19: Tupakula et al. discloses a data store, the data store storing a document design for a multi-channel document (Fig. 2, label 236), the document design for the multi-channel document comprising: a page template defining a layout for a plurality of objects (Fig. 2, label 236; Fig. 4, label 428); a set of variables (Fig. 4, labels 410, 413, 428 show a set of variables); an artificial intelligence (AI) model (Fig. 2, label machine learning model); a production server coupled to a plurality of communications channels (Fig. 2, labels 264, 260); a back-end system (Fig. 2; Fig. 10) comprising: a processor coupled to the data store (Fig. 10, labels 1002, 1025, 1009, 1010; Fig. 2, label 242 as the data store); a memory coupled to the processor (Fig. 10, labels 1009, 1010, 1002), the memory storing a set of instructions executable by the processor (paragraphs 6, 63), the set of instructions comprising instructions for: accessing a selected object selected from the plurality of objects (Fig. 2, label template/schema; Fig. 4, label 428 as an example of a template with a plurality of objects where a selected object is filled (e.g., outcome(?))); populating the selected object with artificial intelligence generated (AI-generated) content (Fig. 2, label 216 as generated content by the machine learning model, label 220. Label 224 outputs extracted content from the data generated from labels 220, 216. Such extracted content is used to populate the schema/template. (paragraph 33)), populating the selected object further comprising: receiving, based on a user interaction with a user interface, an indication to generate content for the object of the document design (Fig. 2, labels 204a-204d as content for the object of the document design received from sources 201a-201d. Paragraph 30 discloses 201a-d as different websites or sources, where such sites are user-interaction-based sites. For example, 201c are publicly available websites, 201b are private websites for a minimal subscription fee, etc. Such data is for populating the template/scheme of label 236.); determining, from the document design, the semantically named variable (Fig. 2, label 228 are questions such as "who won?" or "who are participants?" that include the semantically named variable); generating a request to a generative AI model (Fig. 2, labels 228, 212), the request comprising: a context, the context comprising the semantically named variable (Fig. 2, label 228 as questions or context comprising the semantically named variable); and a prompt to cause the generative AI model to generate text (Fig. 2, label 212 as a prompt or input data to cause the machine learning model to extract data relevant to label 228); inputting the request to the generative AI model (Fig. 2, labels 228, 212 are inputted into labels 220, 216); and receiving a response to the request from the generative AI model (Fig. 2, output from label 236), the response comprising AI-generated text that includes the semantically named variable (Fig. 2, label 224 is extracted content from the data generated from 220, 216 relevant to the questions or semantically named variable. Depending on the content from labels 220, 216, 224, such content can include the semantically named variable); and storing the AI-generated text to the object (Fig. 2, label 242; paragraph 33 discloses the extracted content (AI-generated text) populated into a template/scheme (object) may then be stored to the curated data repository 242); and inputting the document design to the production server (Fig. 2, labels 264, 242, 236) to generate the multi-channel document (Fig. 2, label 268 as the multi-channel document displayed via a user interface, label 256).

Claim 20: Tupakula et al. discloses storing the AI-generated text to the selected object (Fig. 4, label 408 outputs 410. Paragraph 45 discloses storing 410.) comprises storing a variation that comprises the AI-generated text to the selected object (Fig. 4, label 410 as a variation of the template (object) 428, where such variation is data for the scheme. Paragraph 45 discloses storing 410. Fig. 4, label 410 with winner = lumberjack as the AI-generated text.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LINDA WONG, whose telephone number is (571) 272-6044. The examiner can normally be reached 9-5.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Andrew C. Flanders, can be reached at 571-272-7516. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LINDA WONG/
Primary Examiner, Art Unit 2655
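The claim 1 method the examiner maps onto Tupakula can be easier to follow as code: an object in a document design is populated by building a request (context with a semantically named variable, plus a prompt), sending it to a generative model, storing the returned text to the object, and later substituting a sample value for preview (claim 6). This is a minimal hypothetical sketch of that flow as characterized in the rejection; every name is invented for illustration, and `call_generative_model` is a stub, not a real API.

```python
def call_generative_model(request: dict) -> str:
    # Stub standing in for a real generative AI model; it echoes the variable
    # so the returned text "includes the semantically named variable".
    return f"Congratulations {request['context']['variable']}, your order shipped."

def populate_object(document_design: dict) -> dict:
    variable = document_design["variable"]            # e.g. "{{customer_name}}"
    request = {
        "context": {"variable": variable},            # context with the variable
        "prompt": "Generate text for this object using the variable above.",
    }
    generated = call_generative_model(request)        # input request, receive response
    document_design["object"]["content"] = generated  # store AI-generated text to the object
    return document_design

def preview(document_design: dict, sample_value: str) -> str:
    # Substitute the semantically named variable with a sample value (claim 6's preview).
    text = document_design["object"]["content"]
    return text.replace(document_design["variable"], sample_value)

design = {"variable": "{{customer_name}}", "object": {}}
populate_object(design)
print(preview(design, "Alex"))  # "Congratulations Alex, your order shipped."
```

Keeping the variable name inside the stored text, rather than resolving it at generation time, is what lets one AI-generated object be re-rendered per recipient at document-production time.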

Prosecution Timeline

Aug 16, 2024
Application Filed
Mar 21, 2026
Non-Final Rejection — §102, §Other (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596877
COMPUTER-IMPLEMENTED CONTRACT RISK ASSESSMENT PLATFORM LEVERAGING TRANSFORMERS
2y 5m to grant Granted Apr 07, 2026
Patent 12573368
RESIDUAL ADAPTERS FOR FEW-SHOT TEXT-TO-SPEECH SPEAKER ADAPTATION
2y 5m to grant Granted Mar 10, 2026
Patent 12567426
MACHINE LEARNING-BASED KEY GENERATION FOR KEY-GUIDED AUDIO SIGNAL TRANSFORMATION
2y 5m to grant Granted Mar 03, 2026
Patent 12566925
DIALOGUE STATE AWARE DIALOGUE SUMMARIZATION
2y 5m to grant Granted Mar 03, 2026
Patent 12562824
SYSTEMS AND METHODS FOR WIRELESS SIGNAL CONFIGURATION BY A NEURAL NETWORK
2y 5m to grant Granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 85%
With Interview: 99% (+15.5%)
Median Time to Grant: 3y 0m
PTA Risk: Low
Based on 709 resolved cases by this examiner. Grant probability derived from career allow rate.
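The with-interview figure appears to combine the base grant probability with the interview lift, displayed with a ceiling: 85% + 15.5 points would exceed 100%, so the shown 99% suggests a cap. The cap value is an assumption inferred from the displayed numbers, not a documented formula.

```python
# Illustrative combination of base probability and interview lift, with an
# assumed 99% display cap (85 + 15.5 would otherwise exceed 100).
base_probability = 85.0   # career allow rate, in percent
interview_lift = 15.5     # percentage-point lift with interview
with_interview = min(base_probability + interview_lift, 99.0)
print(f"With interview: {with_interview:.0f}%")  # 99%
```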
