Prosecution Insights
Last updated: April 19, 2026
Application No. 18/614,566

TRACKING PROVENANCE OF CONTENT FROM A GENERATIVE MODEL

Non-Final OA (§101, §103)

Filed: Mar 22, 2024
Examiner: AYAD, MARIA S
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Microsoft Technology Licensing, LLC
OA Round: 1 (Non-Final)

Grant Probability: 33% (At Risk)
Predicted OA Rounds: 1-2
Estimated Time to Grant: 3y 10m
Grant Probability with Interview: 50%

Examiner Intelligence

Career Allow Rate: 33% (53 granted / 159 resolved; -21.7% vs TC avg)
Interview Lift: +17.1% across resolved cases with interview
Avg Prosecution: 3y 10m (typical timeline)
Currently Pending: 36
Total Applications: 195 (career history, across all art units)

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 159 resolved cases

Office Action

Grounds: §101, §103
DETAILED ACTION

This action is responsive to the application filed on 3/22/2024 and the preliminary amendment filed on 8/28/2024. Claims 1-50 are pending in this application. Claims 1, 24, and 47-50 are independent claims.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDSs) submitted on 3/22/2024 and 6/26/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Examiner Comments

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Specification

The disclosure is objected to because of the following informality: replace “Fig. 4B” in [0049] with “Fig. 7B”. Appropriate correction is required.

Claim Objections

Claims 1, 3, 4, 7-10, 17, 23, 24, 26-28, 30-36, 40, and 46-50 are objected to because of the following informalities:
- Claims 1, 24, and 47-50: remove the comma before “comprising” in the preamble.
- Claim 1: remove the comma before “to” at the end of line 4.
- Claims 3 and 26: replace “the non-model generated other digital content” with “the other non-model generated digital content”.
- Claims 4 and 27, line 2: replace “… review region and the aggregated …” with “… review region; wherein the aggregated …”.
- Claims 4 and 27: replace “model generated” with “model-generated” in the last 2 lines.
- Claims 7 and 30: replace “the aggregated document” with “an aggregated document” for proper antecedent basis.
- Claims 8 and 31: replace “model generated” with “model-generated” in the line before last.
- Claims 9 and 32: replace “determining, on a per-character, basis” with “determining, on a per-character basis,”.
- Claims 9 and 32: replace “model generated” with “model-generated” in the line before last.
- Claim 10: change the dependence to claim 9, which recites model-generated textual characters, for proper antecedent basis.
- Claims 17 and 40: replace “the document” with “the digital content” for proper antecedent basis.
- Claims 23 and 46: replace “model generated” with “model-generated” in the last line.
- Claim 28: change the dependence to claim 27, which recites a review region, for proper antecedent basis.
- Claims 33 and 34: change the dependence to claim 32, which recites model-generated textual characters, for proper antecedent basis.
- Claim 35, line 2: replace “the processing circuitry and memory are incorporated” with “the processing circuitry is incorporated” for proper antecedent basis.
- Claim 36: change the dependence to claim 35, which recites a server computing device, for proper antecedent basis.
- Claim 48: remove the comma before “to” at the end of line 3.

Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-3, 7, 9, 15-17, 19-22, 24-26, 30, 32, 35, 36, 38-40, 42-45, 48, and 49 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Independent claims 1 and 24 recite “determine … that a textual portion of the digital content is model-generated and originated from a generative model, based on the provenance metadata”, which can be recognized as a concept that can be performed in the human mind (note the determination regarding the origin of a portion of text based on observing and evaluating associated metadata, which can be characterized as a judgement based on an observation and/or an evaluation) and thus falls within the “Mental Processes” grouping of abstract ideas, but for the recitation of generic computer components (such as “circuitry” (for claim 1) and “memory” (for claims 1 and 24)). Examiner notes that the provenance determination module merely refers to a program module. Therefore, the claims recite an abstract idea.

This judicial exception is not integrated into a practical application because the above-indicated limitations are merely instructions to implement the abstract idea on a computer and require no more than a generic computer to perform generic computer functions. The additional elements of “receive, via an edit operation, digital content and provenance metadata associated with the digital content” and “output the digital content to a graphical user interface with a visual indication that the textual portion of the digital content is model-generated” amount to no more than adding insignificant extra-solution activity of mere data gathering/input and data display/output. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.
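For reference, the limitations quoted above describe a receive/determine/output pipeline that can be sketched as follows (a minimal illustration only; the function names and the metadata field are hypothetical, since the claims do not recite a concrete schema):

```python
# Hypothetical sketch of the claimed flow: receive content plus
# provenance metadata, determine whether a textual portion is
# model-generated, and output it with a visual indication.

def determine_provenance(provenance_metadata: dict) -> bool:
    """Determine, from the associated provenance metadata, whether
    a textual portion originated from a generative model."""
    return provenance_metadata.get("source") == "generative-model"

def output_with_indication(text: str, is_model_generated: bool) -> str:
    """Output the content with a visual indication (here, a text tag)
    when the portion is model-generated."""
    return f"[AI] {text}" if is_model_generated else text

content = "Drafted paragraph."
metadata = {"source": "generative-model"}
print(output_with_indication(content, determine_provenance(metadata)))
# prints "[AI] Drafted paragraph."
```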
As discussed above with respect to integration of the abstract idea into a practical application, the “receiving” and “outputting” steps are further considered well-understood, routine, and conventional in view of the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II), indicating that mere collection or receipt of data over a network, as well as presenting information, is a well-understood, routine, conventional function when claimed in a merely generic manner. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Neither can insignificant extra-solution activity. All of these additional elements as generically claimed are thus considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, these independent claims are not patent eligible.

The dependent claims recite additional limitations of: determining that a textual portion of the other digital content is not model-generated (claims 2 and 25); attributing edit operations in an edit history to users who performed the edit operations, and attributing the model-generated content to a user who performed the edit operation (claims 7 and 30); determining, on a per-character basis, whether each of a plurality of textual characters in the digital content is model-generated (claims 9 and 32); and making the provenance determination by detecting metadata of the digital content indicating AI generation (claims 19 and 42). These additional limitations also constitute steps involving judgements based on observation and/or evaluation and fall within the “Mental Processes” grouping of abstract ideas. Accordingly, the claims recite an abstract idea. This judicial exception is not integrated into a practical application.
The additional elements of “receive other digital content” and “output the other digital content in a visually distinctive manner to the displayed model-generated content” (claims 2 and 25) and “reads attestation data in a manifest that is encrypted and/or signed by a digital signature” (claims 20 and 43) amount to no more than adding insignificant extra-solution activity of mere data gathering/input and data display/output. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea.

Furthermore, the additional elements of specifications related to the digital content (claims 3 and 26), the source from which the digital content is received (claims 15 and 38), the provenance metadata (claims 16, 17, 21, 22, 39, 40, 44, and 45), the manifest (claims 20 and 43), and the devices generating and assessing the provenance metadata (claims 35 and 36) all amount to no more than adding insignificant extra-solution activity/specifications related to the digital content, the source of the content, the provenance metadata, the manifest, and the sources generating and assessing the provenance metadata. These additional elements also do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

As discussed above with respect to integration of the abstract idea into a practical application, the “receiving”, “reading”, and “outputting” steps are further considered well-understood, routine, and conventional in view of the Symantec, TLI, and OIP Techs.
court decisions cited in MPEP 2106.05(d)(II), indicating that mere collection or receipt of data over a network, as well as presenting information, is a well-understood, routine, conventional function when claimed in a merely generic manner. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Neither can insignificant extra-solution activity. All of these additional elements as generically claimed are thus considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, the above-indicated dependent claims are also not patent eligible.

Independent claims 48 and 49 recite “determining, on a per-character basis whether each of a plurality of textual characters in the digital content is model generated based on the provenance metadata …”, which can be recognized as a concept that can be performed in the human mind (note the determination, regarding each of a plurality of textual characters, of the origin of each character based on associated metadata, which can be characterized as a judgement based on an observation and/or an evaluation) and thus falls within the “Mental Processes” grouping of abstract ideas, but for the recitation of generic computer components (such as “circuitry and memory” for claim 48). Therefore, the claims recite an abstract idea. This judicial exception is not integrated into a practical application because the above-indicated limitations are merely instructions to implement the abstract idea on a computer and require no more than a generic computer to perform generic computer functions.
The additional elements of “receiving digital content and provenance metadata associated with the digital content” and “outputting the digital content with indications of the textual characters that are model-generated” amount to no more than adding insignificant extra-solution activity of mere data gathering/input and data display/output. These additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception.

As discussed above with respect to integration of the abstract idea into a practical application, the “receiving” and “outputting” steps are further considered well-understood, routine, and conventional in view of the Symantec, TLI, and OIP Techs. court decisions cited in MPEP 2106.05(d)(II), indicating that mere collection or receipt of data over a network, as well as presenting information, is a well-understood, routine, conventional function when claimed in a merely generic manner. Mere instructions to apply an exception using generic computer components cannot provide an inventive concept. Neither can insignificant extra-solution activity. All of these additional elements as generically claimed are thus considered well-understood, routine, and conventional. Therefore, these limitations, taken alone or in combination, do not integrate the abstract idea into a practical application or recite significantly more than the abstract idea. Thus, these independent claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 6, 16, 17, 19, 20, 22, 24, 29, 35, 36, 39, 40, 42, 43, and 45 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco, US PGPUB 2013/0018848 A1 (hereinafter Velasco) in view of Cheruvu et al., US PGPUB 2021/0390446 A1 (hereinafter Cheruvu).

Regarding independent claim 1, Velasco teaches a computing system for tracking provenance of generated content [see the title and fig.
1; see also [0021]] comprising: processing circuitry and associated memory, the processing circuitry being configured to implement a program using portions of the associated memory [see [0021]-[0023] and note the processing apparatus examples (which include processing circuitry and memory) and the computer program instructions] to: receive, via an edit operation, digital content and provenance metadata associated with the digital content [note in [0005] and in [0027] the entering of a piece of content into a content management system and note the associated provenance metadata attribute and lineage metadata attribute; see also [0024]]; determine, via a provenance determination module, that a textual portion of the digital content is generated by a certain source, based on the provenance metadata [note from [0005] the determination of the user that changed the content element and from [0013] the determination of authorship; note the document in [0027] as an example of content with textual portions]; and output the digital content to a graphical user interface with a visual indication that the textual portion of the digital content is generated by a certain source [note in [0029] the display of metadata, which includes the provenance and lineage metadata attributes associated with each piece of content, as per [0027]].

While Velasco teaches provenance of content as being related to different sources, it does not explicitly teach model-generated content where the source of the content is a generative model. Cheruvu teaches both human-generated and model-generated content where the source of the content is a generative model [see [0021], the author being a human or an ML model; see also [0029]-[0030], indicating a content generator utilizing an ML model; see also [0032]].
Cheruvu further teaches provenance metadata associated with the model-generated content that is used to determine that a portion of the content is model-generated [note in [0034] the digital signature associated with model-generated content; note from [0031] the use of the signature to verify that the author is the ML model]. It would have been obvious to one of ordinary skill in the art, having the teachings of Velasco and Cheruvu before the effective filing date of the claimed invention, to modify Velasco’s framework for tracking provenance based on associated provenance metadata by extending it to include model-generated content, as per the teachings of Cheruvu. The motivation for this obvious combination of teachings would be to enable differentiation between human- and model-generated content, as suggested by Cheruvu [again see e.g. [0032]].

Regarding independent claim 24, it is rejected analogously to the rejection of claim 1. Velasco further teaches a method for tracking provenance [see e.g. [0005] and claim 1]. Refer to the rejection of claim 1.

Regarding claims 6 and 29, the rejections of claims 1 and 24 are respectively incorporated. Velasco further teaches that the graphical user interface includes a selector configured to receive a user input to selectively enable and disable tracking of content generation source [note in [0025] presenting a selectable option to track provenance and lineage data for content]. Cheruvu further teaches model-generation tracking. Refer to the rejection of the independent claims for cited portions of Cheruvu and motivations to combine.

Regarding claims 16 and 39, the rejections of claims 1 and 24 are respectively incorporated. Cheruvu further teaches that the provenance metadata includes a creation date, a model version identifier, and/or a prompt used to generate the portion of the digital content [note in [0031] the GUID being a part of the digital signature; note that the GUID serves as an ML model version identifier].
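The signature-based attribution discussed above, in which a model identifier travels with signed provenance metadata and is used to verify authorship, can be illustrated with a minimal sketch (the HMAC construction, key handling, and field names here are assumptions for illustration and are not taken from Cheruvu):

```python
# Illustrative sketch: a manifest carries a model GUID and a content
# digest, protected by a keyed signature; verification recovers the
# model GUID only if both the signature and the digest check out.
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stands in for the signer's key material (hypothetical)

def sign_manifest(content: str, model_guid: str) -> dict:
    """Build a signed manifest binding the content to a model GUID."""
    manifest = {
        "model_guid": model_guid,
        "digest": hashlib.sha256(content.encode()).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_author(content: str, manifest: dict):
    """Return the model GUID if the manifest is authentic, else None."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, manifest["signature"]) and \
            body["digest"] == hashlib.sha256(content.encode()).hexdigest():
        return body["model_guid"]
    return None
```

Tampering with either the content or the manifest causes verification to fail, which is the property relied on when the GUID is used to attribute content to a particular ML model.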
Refer to the rejection of the independent claims for motivations to combine.

Regarding claims 17 and 40, the rejections of claims 1 and 24 are respectively incorporated. Cheruvu further teaches that the provenance metadata is encrypted and/or digitally signed and stored in a manifest associated with the digital content [note in [0051] the generation of the digital signature, including the GUID and/or the ML model ID, and note transmitting it along with the digital content to the content consumer system]. Refer to the rejection of the independent claims for motivations to combine.

Regarding claims 19 and 42, the rejections of claims 1 and 24 are respectively incorporated. Cheruvu further teaches that the provenance determination module is configured to make the provenance determination by detecting metadata of the digital content indicating AI generation [note in [0031] the extraction of the GUID from the digital signature; note that the GUID serves as an ML model identification and thus indicates AI generation]. Refer to the rejection of the independent claims for motivations to combine.

Regarding claims 20 and 43, the rejections of claims 1 and 24 are respectively incorporated. Cheruvu further teaches that the provenance determination module reads attestation data in a manifest that is encrypted and/or signed by a digital signature to verify a source of the model-generated digital content [note in [0051] the generation of the digital signature, including the GUID and/or the ML model ID, and note transmitting it along with the digital content to the content consumer system; again, note in [0031] the extraction of the GUID from the digital signature; note verifying the authenticity of the ML model that generated the content]. Refer to the rejection of the independent claims for motivations to combine.

Regarding claims 22 and 45, the rejections of claims 1 and 24 are respectively incorporated.
Cheruvu further teaches that the provenance metadata comprises a model version, model history, and/or prompts or prompt-related context used to generate the portions of the digital content generated by a generative model [note in [0031] the GUID being a part of the digital signature; note that the GUID serves as an ML model version identifier]. Refer to the rejection of the independent claims for motivations to combine.

Regarding claim 35, the rejection of independent claim 24 is incorporated. The previously combined art teaches that: the processing circuitry is incorporated in a client computing device [note in [0020] of Velasco the option of execution entirely on a user’s computer (see the client devices in fig. 1)], and the provenance metadata has been generated by a provenance metadata generation module executed on a server computing device, from which the digital content was received [see from [0023] and fig. 1 of Cheruvu that the content generation platform 110, which generates the content and the hash (provenance metadata), can be on a server system]. Refer to the rejection of the independent claims for motivations to combine the cited art.

Regarding claim 36, the rejection of claim 35 is incorporated. Cheruvu further teaches that the server computing device is a generative model server [again see from [0023] and fig. 1 the option of a server system for content generation; see also [0030]], or the server computing device is a model interface server configured to execute a model interface, the model interface being configured to interface with the generative model on the generative model server.

Claims 2-5 and 25-28 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, and further in view of Bitton et al., US PGPUB 2024/0296288 A1 (hereinafter Bitton) and Greenberger et al., US PGPUB 2019/0272071 A1 (hereinafter Greenberger).

Regarding claims 2 and 25, the rejections of claims 1 and 24 are respectively incorporated.
The previously combined art does not explicitly teach: receiving other digital content; determining, via the provenance determination module, that a textual portion of the other digital content is not model-generated; and outputting the other digital content in a visually distinctive manner to the displayed model-generated content.

Bitton teaches: receiving other (additional) digital content [note e.g. in fig. 2 the different functions for receiving different parts of digital content, such as typing, pasting, etc.]; and determining, via a provenance determination module, that a textual portion of the other digital content is not model-generated [note the determination of whether certain parts of the text are human- or AI-generated, as e.g. in [0042]; note the example in [0046] where the second paragraph is determined to be written by a human]. It would have been obvious to one of ordinary skill in the art, having the teachings of the previously combined art and Bitton before the effective filing date of the claimed invention, to further modify the framework taught by Velasco and modified by the teachings of Cheruvu to explicitly specify receiving other digital content and determining that a textual portion of the other digital content is not model-generated, as per the teachings of Bitton. The motivation for this obvious combination of teachings would be to enable differentiating different sections or portions of a textual digital content based on human or AI authorship, as suggested by Bitton [see e.g. [0047]].

The previously combined art still does not explicitly teach outputting the other digital content in a visually distinctive manner to the displayed model-generated content. Greenberger teaches outputting different portions of content in a visually distinctive manner to other displayed portions depending on the source that generated each portion [see e.g. in [0012] the display of portions generated by each of different contributors in a different assigned color].
It would have been obvious to one of ordinary skill in the art, having the teachings of the previously combined art and Greenberger before the effective filing date of the claimed invention, to further apply the teaching of Greenberger to the framework taught by Velasco and modified by the teachings of Cheruvu and Bitton to explicitly specify outputting the other digital content in a visually distinctive manner to the displayed model-generated content. The motivation for this obvious combination of teachings would be to facilitate quick and intuitive differentiation of the different portions based on originating source, as suggested by the example given by Greenberger.

Regarding claims 3 and 26, the rejections of claims 2 and 25 are respectively incorporated. Bitton further teaches that model-generated digital content and other non-model-generated digital content are incorporated in an aggregated document [see e.g. [0046], indicating a first paragraph written by an AI program and a second paragraph written by a human in the same document]. Refer to the rejections of claims 2 and 25 for motivations to combine the cited art.

Regarding claims 4 and 27, the rejections of claims 3 and 26 are respectively incorporated. Greenberger further teaches a graphical user interface that includes a document region and a review region, wherein an aggregated document with generated content from different sources is displayed in the document region and an icon related to each generation source is displayed in the review region adjacent the corresponding generated content [see e.g. fig. 3 with document region 310 and review region 330; note the icons indicating the generation source displayed in the review region adjacent the corresponding content]. It would have been obvious to specify the display of a model-generated icon adjacent the model-generated content in the aggregated document having model-generated and non-model-generated content taught by Bitton.
Refer to the rejections of claims 1 and 2 for motivations to combine the cited art.

Regarding claims 5 and 28, the rejections of claims 4 and 27 are respectively incorporated. Greenberger further teaches that a non-model-generated icon is displayed in the review region adjacent the non-model-generated content [again see fig. 3; note each of the icons indicating a non-model-generation source displayed in the review region adjacent the corresponding content].

Claims 7 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, as applied to claims 1 and 24 respectively, and further in view of Pividori et al., US PGPUB 2024/0289540 A1 (hereinafter Pividori).

Regarding claims 7 and 30, the rejections of claims 1 and 24 are respectively incorporated. The previously combined art does not explicitly teach: that the program includes an attribution tracking module configured to attribute edit operations in an edit history to users who performed the edit operations, and that the attribution tracking module is configured to attribute the model-generated content to a user who performed the edit operation via which the model-generated content was received and incorporated into an aggregated document.

Pividori teaches including an attribution tracking module configured to attribute edit operations in an edit history to users who perform the edit operations, and attributing the model-generated content to a user who performed an edit operation via which a model-generated content was received and incorporated into an aggregated document [note in [0031] a scenario in which model-generated content is accepted into a revised document (aggregated document) by a human author, and the attribution of the edit operation (by which the model-generated content is received and incorporated into the revised document) to the human author (as an indication of author acceptance)].
It would have been obvious to one of ordinary skill in the art, having the teachings of the previously combined art and Pividori before the effective filing date of the claimed invention, to apply the teaching of Pividori to the framework taught by Velasco and modified by the teachings of Cheruvu to explicitly specify that the program includes an attribution tracking module configured to attribute edit operations in an edit history to users who performed the edit operations, and that the attribution tracking module is configured to attribute the model-generated content to a user who performed the edit operation via which the model-generated content was received and incorporated into an aggregated document. The motivation for this obvious combination of teachings would be to create a demarcation that aids in upholding the integrity of authorship attribution, providing a clear understanding of the unique roles played by both human editors and AI technologies, as suggested by Pividori [see [0032]].

Claims 8 and 31 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, as applied to claims 1 and 24 respectively, and further in view of Ziolkowski et al., US Patent No. 12,061,902 B1 (hereinafter Ziolkowski).

Regarding claims 8 and 31, the rejections of claims 1 and 24 are respectively incorporated. The previously combined art does not explicitly teach that the program includes an attribution tracking module configured with a model attribution threshold, the attribution tracking module being configured to remove a model-generated attribution of the model-generated content upon determining that the model-generated content has been edited to an extent that the model attribution threshold is no longer met.
Ziolkowski teaches an attribution tracking module configured with a model attribution threshold, the attribution tracking module being configured to remove a model-generated attribution of the model-generated content upon determining that the model-generated content has been edited to an extent that the model attribution threshold is no longer met [note col. 13, lines 24-29 and lines 43-50; note the change of attribution from the generative AI to the human upon determining that a certain threshold has been surpassed]. It would have been obvious to one of ordinary skill in the art, having the teachings of the previously combined art and Ziolkowski before the effective filing date of the claimed invention, to apply the teaching of Ziolkowski to the framework taught by Velasco and modified by the teachings of Cheruvu to explicitly specify that the program includes an attribution tracking module configured with a model attribution threshold, the attribution tracking module being configured to remove a model-generated attribution of the model-generated content upon determining that the model-generated content has been edited to an extent that the model attribution threshold is no longer met. The motivation for this obvious combination of teachings would be to enable changing this threshold as a policy set by a user or organization and/or depending on the type of content, to enable demonstrating provenance of content to an auditor for compliance purposes, defeating allegations of passing AI-generated content off as one’s own, as suggested by Ziolkowski [see col. 13, lines 30-33 and col. 8, lines 1-6].

Claims 9, 10, 18, 32, 33, 41, 48, and 49 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, as applied to claim 1 (for claims 9, 10, and 18) and claim 24 (for claims 32, 33, and 41), and further in view of Bhave et al., US Patent No. 12,061,675 B1 (hereinafter Bhave).
Regarding claims 9 and 32, the rejections of claims 1 and 24, respectively, are incorporated. Cheruvu further teaches determining that a plurality of textual characters in the digital content is model-generated [note from figs. 4-5 the use of the digital signature to determine that certain content is model-generated; note from [0034] that the content can be plain text]. The previously combined art, however, does not explicitly teach that the determining includes determining, on a per-character basis, whether each of a plurality of textual characters in the digital content is model-generated.

Bhave teaches a determination, on a per-character basis, for each of a plurality of textual characters in digital content, of a certain attribute [note e.g. in col. 6, lines 35-55 indicating a conversion of each of the characters within a document to a certain character representation based on an attribute of the character, which makes determining the attribute on a per-character basis straightforward using the character representation; see also col. 8, lines 26-30].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Bhave before the effective filing date of the claimed invention to apply the teachings of Bhave regarding per-character conversions and determinations based on an attribute of the character to the digital signature taught by Cheruvu, utilizing the model generation as the attribute for each character. The motivation for this obvious combination of teachings would be to allow simple mapping between representations of text characters and corresponding attributes by assigning and encoding attributes at a character level, as suggested by Bhave [see col. 6, lines 48-55], which would simplify determination of model-generated text such as that taught by Cheruvu on a per-character basis.

Regarding claims 10 and 33, the rejections of claims 9 and 32, respectively, are incorporated.
Cheruvu further teaches provenance metadata that includes an indication of an attribute of model-generated textual content [again, note from figs. 4-5 the use of the digital signature to determine that certain content is model-generated; note from [0034] that the content can be plain text]. Bhave further teaches a character encoding that encodes the indication of an attribute of the textual characters on a per-character basis [note e.g. in col. 6, lines 35-55 indicating a conversion of each of the characters within a document to a certain character representation based on an attribute of the character; see also col. 8, lines 26-30]. Refer to the rejections of claims 9 and 32 for motivations to combine the cited art.

Regarding claims 18 and 41, the rejections of claims 1 and 24, respectively, are incorporated. Cheruvu further teaches making the provenance determination that the text originated from a generative model [note from figs. 4-5 the use of the digital signature to determine that certain content is model-generated; note from [0034] that the content can be plain text]. The previously combined art, however, does not explicitly teach that the determining includes making the provenance determination by detecting a character encoding indicative of text originating from a generative model, on a per-character basis.

Bhave teaches making a determination by detecting a character encoding indicative of a certain attribute, on a per-character basis [note e.g. in col. 6, lines 35-55 indicating a conversion of each of the characters within a document to a certain character representation based on an attribute of the character, which makes determining the attribute on a per-character basis straightforward using the character representation; see also col. 8, lines 26-30].
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Bhave before the effective filing date of the claimed invention to apply the teachings of Bhave regarding character encoding indicative of a certain attribute, on a per-character basis, to the provenance determination that the text originated from a generative model taught by Cheruvu. The motivation for this obvious combination of teachings would be to allow simple mapping between representations of text characters and corresponding attributes by assigning and encoding attributes at a character level, as suggested by Bhave [see col. 6, lines 48-55], which would simplify determination of model-generated text such as that taught by Cheruvu on a per-character basis.

Regarding independent claim 48, Velasco teaches a computing system [see the title and fig. 1; see also [0021]] comprising: processing circuitry and associated memory, the processing circuitry being configured to implement a provenance determination module [see [0021]-[0023] and note the processing apparatus examples (which include processing circuitry and memory) and computer program instructions] to: receive digital content and provenance metadata associated with the digital content [note in [0005] and in [0027] the entering of a piece of content into a content management system, and note the associated provenance metadata attribute and lineage metadata attribute; see also [0024]]; determine whether a textual portion of the digital content is generated by a certain source, based on the provenance metadata [note from [0005] the determination of the user that changed the content element and from [0013] the determination of authorship; note the document in [0027] as an example of content with textual portions]; and output the digital content with indications of the textual portion that is generated by a certain source [note in [0029] the display of metadata, which includes the provenance and lineage
metadata attributes associated with each piece of content, as per [0027]].

While Velasco teaches provenance of content as being related to different sources, it does not explicitly teach model-generated content where the source of the content is a generative model. Neither does it teach that the determining includes determining, on a per-character basis, whether each of a plurality of textual characters in the digital content is model-generated based on the provenance metadata.

Cheruvu teaches both human-generated and model-generated content where the source of the content is a generative model [see in [0021] the author being a human or an ML model; see also [0029]-[0030] indicating a content generator utilizing an ML model; see also [0032]]. Cheruvu further teaches provenance metadata associated with the model-generated content that is used to determine whether a portion of the content is model-generated [note in [0034] the digital signature associated with model-generated content; note from [0031] the use of the signature to verify that the author is the ML model].

It would have been obvious to one of ordinary skill in the art having the teachings of Velasco and Cheruvu before the effective filing date of the claimed invention to modify Velasco's framework for tracking provenance based on associated provenance metadata by extending it to include model-generated content, as per the teachings of Cheruvu. The motivation for this obvious combination of teachings would be to enable differentiation between human- and model-generated content, as suggested by Cheruvu [again see e.g. [0032]]. While Cheruvu further teaches determining that a plurality of textual characters in the digital content is model-generated [note from figs.
4-5 the use of the digital signature to determine that certain content is model-generated; note from [0034] that the content can be plain text], the previously combined art still does not explicitly teach that the determining includes determining, on a per-character basis, whether each of a plurality of textual characters in the digital content is model-generated.

Bhave teaches a determination, on a per-character basis, of whether each of a plurality of textual characters in digital content has a certain attribute [note e.g. in col. 6, lines 35-55 indicating a conversion of each of the characters within a document to a certain character representation based on an attribute of the character, which makes determining the attribute on a per-character basis straightforward using the character representation; see also col. 8, lines 26-30].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Bhave before the effective filing date of the claimed invention to apply the teachings of Bhave regarding per-character conversions and determinations based on an attribute of the character to the digital signature (provenance metadata) taught by Cheruvu, utilizing the model generation as the attribute for each character. The motivation for this obvious combination of teachings would be to allow simple mapping between representations of text characters and corresponding attributes by assigning and encoding attributes at a character level, as suggested by Bhave [see col. 6, lines 48-55], which would simplify determination of model-generated text such as that taught by Cheruvu on a per-character basis.
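The per-character determination at issue can be sketched as follows. This is a minimal illustration under stated assumptions: the span representation, flag type, and names are hypothetical and are not the encoding taught by Bhave or Cheruvu.

```python
# Pair each character of the digital content with a provenance flag,
# True where the character's index falls inside a model-generated span.
def tag_characters(text: str, model_spans: list[tuple[int, int]]) -> list[tuple[str, bool]]:
    """Return (character, is_model_generated) pairs for every character,
    using half-open [start, end) spans of model-generated text."""
    flags = [False] * len(text)
    for start, end in model_spans:
        for i in range(start, min(end, len(text))):
            flags[i] = True
    return list(zip(text, flags))

# "world" is treated as model-generated; "Hello " as human-authored.
tagged = tag_characters("Hello world", [(6, 11)])
model_chars = "".join(ch for ch, is_model in tagged if is_model)
```

The point of a per-character structure like this is that any downstream consumer can answer "is this specific character model-generated?" without re-deriving span boundaries.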
Examiner notes that it would have been further obvious to utilize the per-character determinations of whether each of the textual characters is model-generated, as per the combined teachings of Cheruvu and Bhave, to update the indications taught by Velasco and modified by the teachings of Cheruvu to be on a per-character basis as well, so that indications are specifically for textual characters.

Regarding independent claim 49, it is rejected analogously to the rejection of claim 48. Velasco further teaches a computerized method for tracking provenance [see e.g. [0005] and claim 1]. Refer to the rejection of claim 48 for further details.

Claims 11-13 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu and Bhave, as applied to claim 10 (for claims 11-13) and claim 32 (for claim 34) above, respectively, and further in view of JOYCE, US PGPUB (hereinafter as Joyce).

Regarding claims 11 and 34, the rejections of claims 10 and 32, respectively, are incorporated. As above, Cheruvu further teaches provenance metadata that includes an indication of an attribute of model-generated textual content [again, note from figs. 4-5 the use of the digital signature to determine that certain content is model-generated; note from [0034] that the content can be plain text]. As above, Bhave further teaches an encoding that encodes the indication of an attribute of the textual characters on a per-character basis [note e.g. in col. 6, lines 35-55 indicating a conversion of each of the characters within a document to a certain character representation based on an attribute of the character; see also col. 8, lines 26-30]. Refer to the rejections of claims 9 and 32 for motivations to combine the cited art. The previously combined art, however, does not explicitly teach that the provenance metadata comprises surrogate character encoding or Unicode code plane encoding indicating the model-generated textual characters.
Joyce teaches the use of Unicode code plane encoding for any additional information, including possibly source data [see e.g. [0033]; see also the first 6 lines of [0017]].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Joyce before the effective filing date of the claimed invention to further modify the per-character encoding taught by Bhave to use the Unicode code plane encoding, as per the teachings of Joyce. The motivation for this obvious combination of teachings would be to allow utilizing the block of code points available and permanently reserved in the Unicode plane for the use of any current application (which would be the encoding of per-character model-related provenance), as suggested by Joyce [again, see [0033]].

Regarding claim 12, the rejection of claim 11 is incorporated. The previously combined art teaches that: the processing circuitry and memory are incorporated in a client computing device [note in [0020] of Velasco the option of execution entirely on a user's computer (see the client devices in fig. 1)], and the provenance metadata has been generated by a provenance metadata generation module executed on a server computing device, from which the digital content was received [see from [0023] and fig. 1 of Cheruvu that the content generation platform 110, which generates the content and the hash (provenance metadata), can be on a server system]. Refer to the rejection of the independent claims for motivations to combine the cited art.

Regarding claim 13, the rejection of claim 12 is incorporated. Cheruvu further teaches that the server computing device is a generative model server [again see from [0023] and fig.
1 the option of a server system for content generation; see also [0030]], or the server computing device is a model interface server configured to execute a model interface, the model interface being configured to interface with the generative model on the generative model server.

Claims 14, 15, 37, and 38 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, as applied to claim 1 (for claims 14 and 15) and claim 24 (for claims 37 and 38), and further in view of Bitton.

Regarding claims 14 and 37, the rejections of claims 1 and 24, respectively, are incorporated. Cheruvu teaches provenance metadata related to model-generated digital content that is used to determine that a portion of the content is model-generated [note in [0034] the digital signature associated with model-generated content; note from [0031] the use of the signature to verify that the author is the ML model]. The previously combined art does not explicitly teach that the program is configured to output the digital content with an indication of the textual portion that is model-generated, at least in part by: formatting the digital content for display using the provenance metadata, the formatted digital content including a visual provenance indication labeling the model-generated digital content, and outputting the formatted digital content including the visual provenance indication to a display.

Bitton teaches outputting digital content with an indication of a textual portion that is model-generated, at least in part by: formatting the digital content for display using provenance information, the formatted digital content including a visual provenance indication labeling the model-generated digital content, and outputting the formatted digital content including the visual provenance indication to a display [note in fig.
5 and [0047] the display of text with annotations indicating sections that are indicative of AI authorship; note the determination of provenance information in [0042]].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Bitton, before the effective filing date of the claimed invention, to further modify the framework taught by Velasco and modified by the teachings of Cheruvu to explicitly specify outputting digital content with an indication of a textual portion that is model-generated, at least in part by: formatting the digital content for display using the provenance metadata, the formatted digital content including a visual provenance indication labeling the model-generated digital content, and outputting the formatted digital content including the visual provenance indication to a display, as per the combined teachings of Cheruvu and Bitton. The motivation for this obvious combination of teachings would be to enable differentiating different sections or portions of a textual digital content based on human or AI authorship, as suggested by Bitton [see e.g. [0047]].

Regarding claims 15 and 38, the rejections of claims 1 and 24, respectively, are incorporated. The previously combined art does not explicitly teach that the program is configured to receive the digital content from: a copilot interface provided in a productivity application, a browser, a social media application, or a game program executed by the processing circuitry; an instance of a generative model associated with a model interface GUI displayed by the processing circuitry; a model interface associated with the model interface GUI displayed by the processing circuitry; a clipboard program executed by the processing circuitry; or a document.

Bitton teaches a program configured to receive digital content from a clipboard program executed by processing circuitry [see [0048] and note the optional "pasting" function and use of a clipboard].
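The kind of visual provenance labeling discussed for claims 14 and 37, formatting content so model-generated runs carry a visible indication, can be sketched as follows. The marker strings and names here are purely illustrative assumptions, not the annotation scheme of Bitton or the claims.

```python
def format_with_provenance(tagged: list[tuple[str, bool]]) -> str:
    """Wrap each contiguous run of model-generated characters in a visible
    marker so a reader can distinguish AI-authored from human-authored text."""
    out: list[str] = []
    in_model = False
    for ch, is_model in tagged:
        if is_model and not in_model:
            out.append("[AI]")    # opening marker for a model-generated run (illustrative)
        elif not is_model and in_model:
            out.append("[/AI]")   # closing marker when the run ends
        in_model = is_model
        out.append(ch)
    if in_model:
        out.append("[/AI]")       # close a run that extends to the end of the text
    return "".join(out)

# "AI" at the end is flagged as model-generated; "Hi " is human-authored.
labeled = format_with_provenance([("H", False), ("i", False), (" ", False),
                                  ("A", True), ("I", True)])
```

A real renderer would map the same run boundaries to styling (highlighting, color) rather than inline markers; the grouping logic is the same.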
It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Bitton, before the effective filing date of the claimed invention, to further modify the framework taught by Velasco and modified by the teachings of Cheruvu to explicitly specify that the program is configured to receive the digital content from a clipboard program executed by the processing circuitry, as per the teachings of Bitton. The motivation for this obvious combination of teachings would be to enable combining text from multiple sources into one document via pasting functions, as suggested by the example given by Bitton [see [0048]].

Claims 21 and 44 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, as applied to claims 1 and 24 respectively, and further in view of LALLY et al., US PGPUB 2022/0414237 A1 (hereinafter as Lally).

Regarding claims 21 and 44, the rejections of claims 1 and 24, respectively, are incorporated. The previously combined art does not explicitly teach that the provenance metadata includes license information. Lally teaches metadata that includes license information [note in [0056] that metadata of AI-generated information includes license information].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Lally before the effective filing date of the claimed invention to modify the provenance metadata taught by Cheruvu to explicitly specify that it includes license information, as per the teachings of Lally. The motivation for this obvious combination of teachings would be to ensure that the information remains under its sources' control over its lifetime, as suggested by Lally [see [0004]].

Claims 23 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over Velasco in view of Cheruvu, as applied to claims 22 and 45 respectively, and further in view of Han et al., US PGPUB 2024/0086648 A1 (hereinafter as Han).
Regarding claims 23 and 46, the rejections of claims 22 and 45, respectively, are incorporated. The previously combined art does not explicitly teach that the program is configured to display a regeneration option using the model version and the model history, along with the indications of model-generated content within the digital content. Han teaches displaying a regeneration option along with the indications of model-generated content within digital content [note in fig. 6B the regeneration option 642 displayed with highlighted content 626F].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Han before the effective filing date of the claimed invention to modify the program taught by Velasco and modified by Cheruvu to explicitly display a regeneration option along with the indications of model-generated content within digital content, as per the teachings of Han. The motivation for this obvious combination of teachings would be to enable refining model-generated portions that a human may find incorrect or inappropriate, as suggested by Han [see [0123]].

Han further teaches displaying additional items with the regeneration option [see options 644 and 646 in fig. 6B]. Han also teaches a model version and model history [see [0066] indicating a sequence of older and newer version numbers of the model and indicating that the older may still be applied]. It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Han before the effective filing date of the claimed invention to further substitute the model version and model history taught by Han for the additional displayed items also taught by Han, to reach the recited limitation of displaying a regeneration option using the model version and the model history.
It would have been further obvious to utilize this information saved by Han to provide further options that the human user may select to regenerate the indicated portion of content. Because Han teaches other selectable items to display as well as the model version and history data, one of ordinary skill in the art would have been able to make a simple substitution of known selectable options for others to generate the predictable result of allowing a human user to choose specifics of the generative model for regeneration. See MPEP 2143 I B.

Claim 47 is rejected under 35 U.S.C. 103 as being unpatentable over Hailpern et al., US PGPUB 2008/0021922 A1 (hereinafter as Hailpern) in view of Cheruvu and Parasnis et al., US Patent No. 11,809,688 B1 (hereinafter as Parasnis).

Regarding independent claim 47, Hailpern teaches a computing system comprising: processing circuitry and associated memory, the processing circuitry [note the system in fig. 7 and see the description in [0128]-[0129], including a CPU, memory, and software modules of instructions stored therein; Examiner notes that the first and second program modules are included in the instructions] being configured to: at a first program module: receive digital content and provenance metadata associated with the digital content [see in fig. 4, step 401 and [0118] receiving an editable object with associated originality-related information; see [0031] for examples of originality-related information]; perform an edit operation on a portion of the digital content to thereby produce an operation output [note in [0031]-[0032] exemplary edit operations on a portion of the editable object; note the example of importing, including copy-or-cut-and-pasting]; maintain the provenance metadata associated with the portion of the digital content in the operation output [see in fig.
4, step 404 and the related description retrieving the originality information; note in [0014] that the originality-related information is maintained in association with the elements of an editable object]; at a second program module: receive the operation output [again note in [0031]-[0032] exemplary edit operations on a portion of the editable object; note the example of importing, including copy-or-cut-and-pasting]; determine that the portion of the digital content is generated by a certain source based on the provenance metadata [note in [0035] identifying originality-related information; see also [0010]]; and output the digital content with indications of the portion that is generated by a certain source [note in fig. 4, steps 402 and 404, as well as the related description, the presentation of the element and the originality-related information].

Hailpern does not explicitly teach determining whether the portion of the digital content is model-generated based on the provenance metadata, nor outputting the digital content with indications of the portion that is model-generated. Neither does it teach that receiving digital content and provenance metadata associated with the digital content is at a model interface GUI of a generative model.

Cheruvu teaches model-generated content where the original source of the content is a generative model [see in [0021] the author being a human or an ML model; see also [0029]-[0030] indicating a content generator utilizing an ML model; see also [0032]]. Cheruvu further teaches provenance metadata associated with the model-generated content and determining whether the portion of the digital content is model-generated based on the provenance metadata [note in [0034] the digital signature associated with model-generated content; note from [0031] the use of the signature to verify that the author is the ML model]. Cheruvu further teaches transmitting model-generated digital content and provenance metadata associated with it [see step 440 of fig. 4].
It would have been obvious to one of ordinary skill in the art having the teachings of Hailpern and Cheruvu before the effective filing date of the claimed invention to modify Hailpern's framework for tracking provenance and outputting portions of content with source indications based on associated provenance metadata by extending it to include received model-generated content, as per the teachings of Cheruvu. The motivation for this obvious combination of teachings would be to enable differentiation between human- and model-generated content, as suggested by Cheruvu [again see e.g. [0032]].

The previously combined art still does not explicitly teach that receiving digital content and provenance metadata associated with the digital content is at a model interface GUI of a generative model. Parasnis teaches receiving digital content at a model interface GUI of a generative model [see e.g. fig. 13 showing a generative model interface with an option to copy content; see also the related description].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Parasnis before the effective filing date of the claimed invention to further modify Hailpern's framework, which has been modified to include receiving model-generated content and provenance metadata as per the teachings of Cheruvu, by specifying receiving the content at a model interface GUI of a generative model, as per the teachings of Parasnis. The motivation for this obvious combination of teachings would be to enable customizing the model-generated content through the generative model GUI and exporting it for reuse in other applications by copying to the clipboard, as suggested by the example given by Parasnis [see col. 11, lines 34-35; see also the description field in fig. 13 that the user can use for customized generation].

Claim 50 is rejected under 35 U.S.C. 103 as being unpatentable over Cheruvu in view of Bhave.
Regarding independent claim 50, Cheruvu teaches a computerized method [see e.g. fig. 4] comprising: generating provenance metadata associated with digital content, where the provenance metadata includes indications of whether a plurality of textual characters in the digital content is model-generated [note fig. 4, especially step 430, indicating generating a digital signature for content generated by an ML model; see also [0034]; note from [0034] that the content can be plain text], wherein the provenance metadata includes surrogate character or Unicode code plane encoding and/or metadata of the digital content indicating AI generation [note from [0034] that the signature includes a GUID and an ML model ID; see also [0031]]; and outputting the digital content and provenance metadata associated with the digital content [see step 440 of fig. 4 indicating transmitting the content and the associated digital signature].

While Cheruvu teaches provenance metadata that includes an indication of an attribute of model-generated textual content [again, note from figs. 4-5 the use of the digital signature to determine that certain content is model-generated; note from [0034] that the content can be plain text], it does not explicitly teach that the provenance metadata includes a character encoding that encodes the model-generated indication on a per-character basis, and that the provenance metadata includes surrogate character or Unicode code plane encoding and/or metadata of the digital content indicating AI generation, on a per-character basis. Bhave teaches a character encoding that encodes the indication of an attribute of the textual characters on a per-character basis [note e.g. in col. 6, lines 35-55 indicating a conversion of each of the characters within a document to a certain character representation based on an attribute of the character; see also col. 8, lines 26-30].
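Per-character provenance carried via a Unicode code plane, as recited in claim 50, can be sketched as follows. This is a minimal illustration under stated assumptions: the choice of Supplementary Private Use Area-A (plane 15, starting at U+F0000) as the carrier, the fixed offset, and the helper names are hypothetical and are not the encoding of any cited reference.

```python
# Supplementary Private Use Area-A begins at U+F0000 (Unicode plane 15).
# Illustrative scheme: shift each model-generated character into the
# private-use plane so the provenance flag travels inside the text itself.
PUA_BASE = 0xF0000  # hypothetical offset marking "model-generated"

def encode(text: str, flags: list[bool]) -> str:
    """Map each model-generated character to its private-use counterpart;
    assumes input characters are in the Basic Multilingual Plane."""
    return "".join(chr(ord(ch) + PUA_BASE) if f else ch
                   for ch, f in zip(text, flags))

def decode(encoded: str) -> list[tuple[str, bool]]:
    """Recover (character, is_model_generated) pairs from the encoding."""
    return [(chr(ord(ch) - PUA_BASE), True) if ord(ch) >= PUA_BASE else (ch, False)
            for ch in encoded]

# Round-trip: "AI" flagged as model-generated, "!" as human-typed.
roundtrip = decode(encode("AI!", [True, True, False]))
```

Because private-use code points are permanently reserved and never assigned by the Unicode Consortium, an application-specific scheme like this cannot collide with standard character assignments, which is the property the Joyce-based reasoning above relies on.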
It would have been obvious to one of ordinary skill in the art having the teachings of Cheruvu and Bhave before the effective filing date of the claimed invention to apply the teachings of Bhave regarding character encoding on a per-character basis, based on an attribute of the character, to the provenance metadata (digital signature) indicating AI generation taught by Cheruvu, utilizing the model generation as the attribute for each textual character. The motivation for this obvious combination of teachings would be to allow simple mapping between representations of text characters and corresponding attributes by assigning and encoding attributes at a character level, as suggested by Bhave [see col. 6, lines 48-55], which would simplify indications of model-generated text such as that taught by Cheruvu on a per-character basis.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Examiner notes the prior art US 2019/0251165 A1 (BACHRACH), which teaches metadata indicating message origin [see fig. 2 and [0036]].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA S AYAD, whose telephone number is (571) 272-2743. The examiner can normally be reached Monday-Friday, 7:30 am - 4:30 pm EST (alternating Fridays). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIA S AYAD/
Primary Examiner, Art Unit 2172

Prosecution Timeline

Mar 22, 2024
Application Filed
Aug 29, 2024
Response after Non-Final Action
Jan 05, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554263
DRONE-ASSISTED VEHICLE EMERGENCY RESPONSE SYSTEM
2y 5m to grant Granted Feb 17, 2026
Patent 12549436
INTERNET OF THINGS CONFIGURATION USING EYE-BASED CONTROLS
2y 5m to grant Granted Feb 10, 2026
Patent 12474181
METHOD FOR GENERATING DIAGRAMMATIC REPRESENTATION OF AREA AND ELECTRONIC DEVICE THEREOF
2y 5m to grant Granted Nov 18, 2025
Patent 12443856
DECISION INTELLIGENCE SYSTEM AND METHOD
2y 5m to grant Granted Oct 14, 2025
Patent 12443272
Proactive Actions Based on Audio and Body Movement
2y 5m to grant Granted Oct 14, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
33%
Grant Probability
50%
With Interview (+17.1%)
3y 10m
Median Time to Grant
Low
PTA Risk
Based on 159 resolved cases by this examiner. Grant probability derived from career allow rate.
