Prosecution Insights
Last updated: April 19, 2026
Application No. 17/670,753

FACILITATING IDENTIFICATION OF SENSITIVE CONTENT

Final Rejection: §101, §103
Filed: Feb 14, 2022
Examiner: ANSARI, AZAM A
Art Unit: 3621
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Adobe Inc.
OA Round: 4 (Final)
Grant Probability: 48% (Moderate)
OA Rounds: 5-6
To Grant: 3y 8m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 48% (162 granted / 338 resolved; -4.1% vs TC avg)
Interview Lift: +49.7% on resolved cases with interview
Typical Timeline: 3y 8m avg prosecution; 38 applications currently pending
Career History: 376 total applications across all art units

Statute-Specific Performance

§101: 34.2% (-5.8% vs TC avg)
§103: 38.9% (-1.1% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 9.2% (-30.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 338 resolved cases

Office Action

Rejections under §101 and §103
DETAILED ACTION

Response to Amendment

This action is in response to the amendment filed on 02/17/2026. Claims 8 and 16 have been amended and claims 21-26 have been newly added. Claims 8, 11-19, and 21-26 are pending and currently under consideration for patentability.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Inventorship

This application currently names joint inventors. In considering patentability of the claims under pre-AIA 35 U.S.C. 103(a), the examiner presumes that the subject matter of the various claims was commonly owned at the time any inventions covered therein were made absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and invention dates of each claim that was not commonly owned at the time a later invention was made in order for the examiner to consider the applicability of pre-AIA 35 U.S.C. 103(c) and potential pre-AIA 35 U.S.C. 102(e), (f) or (g) prior art under pre-AIA 35 U.S.C. 103(a).

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 8, 11-19, and 21-26 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. The claims are directed to a judicial exception (i.e., a law of nature, natural phenomenon, or abstract idea) without significantly more.

Step 1: In the test for patent subject matter eligibility, claims 8, 11-19, and 21-26 satisfy Step 1 (see the 2019 Revised Patent Subject Matter Eligibility Guidance), as they are related to a process, machine, manufacture, or composition of matter.
Claims 8, 11-15 recite non-transitory computer-readable media (Examiner notes that, according to ¶ [0082] of the Applicant’s originally-filed specification, “Computer storage media does not comprise signals per se.”); claims 16-19 and 21 recite a system; and claims 22-26 recite a method. When assessed under Step 2A, Prong I, they are found to be directed towards an abstract idea, for the reasons explained below.

Step 2A, Prong I: Under Step 2A, Prong I, claims 8, 16, and 22 are directed to an abstract idea without significantly more, as they all recite a judicial exception. Claims 8, 16, and 22 recite limitations directed to the abstract idea, including “determining that a content includes sensitive language; in accordance with publication of the content, identifying audience segment movement that indicates movement of one or more audience members from one audience segment to another audience segment; determining the audience segment movement deviates from an expected audience segment movement; and based on the determination that the content includes sensitive language and the determination that the audience segment movement deviates from the expected audience segment movement, using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith.” These limitations are not seen as any more than the judicial exception.
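The claim language quoted above describes, at a high level, a feedback loop: classify content for sensitive language, compare observed audience segment movement against an expected baseline, and use both signals to refine topic weights. A minimal, hypothetical sketch of that loop follows; every name, weight, and threshold here is invented for illustration and is not taken from the application or the cited references.

```python
# Hypothetical sketch of the feedback loop recited in claims 8, 16, and 22.
# All topic names, weights, and thresholds are invented for illustration.

def detect_sensitive(content, topic_weights):
    """Return weighted topics that appear in the content above a cutoff."""
    text = content.lower()
    return [t for t, w in topic_weights.items() if t in text and w > 0.5]

def movement_deviates(observed, expected, tolerance=0.1):
    """Flag audience segment movement that deviates from the expected rate."""
    return abs(observed - expected) > tolerance

def refine_weights(topic_weights, flagged_topics, deviated, step=0.05):
    """Use both indications as feedback to nudge flagged topic weights up."""
    if not (flagged_topics and deviated):
        return dict(topic_weights)
    return {t: min(1.0, w + step) if t in flagged_topics else w
            for t, w in topic_weights.items()}

weights = {"layoffs": 0.6, "injury": 0.7, "weather": 0.2}
flagged = detect_sensitive("Our ad mentions layoffs this quarter.", weights)
deviated = movement_deviates(observed=0.30, expected=0.12)
weights = refine_weights(weights, flagged, deviated)
```

Only when both indications fire (a flagged topic and a movement deviation) do the weights move, mirroring the "based on the determination ... and the determination ..." conjunction in the claim.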
Claims 8, 16, and 22 recite additional limitations including “via a machine learned model; the machine learned model trained using a set of sensitive topics, the set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics; via a model; and using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model.” The claims are considered to be an abstract idea under certain methods of organizing human activity because they are directed to commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations) and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions), such as refining sensitive topics or weights based on a determination that content includes sensitive language and audience segment movement deviation. The claims are also considered to be an abstract idea under mental processes because they are directed to concepts performed in the human mind (including an observation, evaluation, judgment, opinion), such as receiving data (i.e., content for which a sensitivity determination is desired); determining/identifying data (i.e., that a content includes sensitive language, audience segment movement that indicates movement of one or more audience members from one audience segment to another audience segment, and that the audience segment movement deviates from an expected audience segment movement); and, based on the determination, using an indication/notification (i.e., that the content is potentially sensitive) to refine sensitive topics or weights. Therefore, under Step 2A, Prong I, claims 8, 16, and 22 are directed towards an abstract idea.

Step 2A, Prong II: Step 2A, Prong II determines whether any claim recites additional elements that integrate the judicial exception (abstract idea) into a practical application. Claims 8, 16, and 22 recite additional limitations including “via a machine learned model; the machine learned model trained using a set of sensitive topics, the set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics; via a model; and using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model.” These limitations are seen as adding the words “apply it” (or an equivalent) to the judicial exception, or as mere instructions to implement an abstract idea on a computer, or as merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements are seen as using a computer or tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use (i.e., a machine learning model and how it is trained), and therefore do not integrate the abstract idea into a practical application.
In Affinity Labs of Texas v. DirecTV, LLC, the court explained that although additional elements of this kind limit the use of the abstract idea, such limitations merely confine the abstract idea to a particular technological environment, which fails to add an inventive concept to the claims. Under Step 2A, Prong II, these claims remain directed towards an abstract idea.

Step 2B: Claims 8, 16, and 22 recite additional limitations including “via a machine learned model; the machine learned model trained using a set of sensitive topics, the set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics; via a model; and using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model.” These limitations do not integrate the judicial exception (abstract idea) into a practical application, for the reasons provided in Step 2A, Prong II. The recitation of the machine learned model or model is described in an “apply it” manner (i.e., “via”), and the recitation of how the machine learned model is trained is described at a high level, which is why these additional limitations are analyzed in Step 2A, Prong II. Furthermore, Examiner notes that merely training a machine learning model with data (i.e., a set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics), using the machine learning model to determine data (i.e., whether the content has potentially sensitive language), and retraining the machine learning model based on data (i.e., feedback) in order to improve accuracy is seen as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g) - because this is a computer function that is well-understood, routine, and conventional. For example, it has been well known since at least 1996 that “Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.” (See Wikipedia: Machine learning: The definition "without being explicitly programmed" is often attributed to Arthur Samuel, who coined the term "machine learning" in 1959, but the phrase is not found verbatim in this publication, and may be a paraphrase that appeared later. Confer "Paraphrasing Arthur Samuel (1959), the question is: How can computers learn to solve problems without being explicitly programmed?" in Koza, John R.; Bennett, Forrest H.; Andre, David; Keane, Martin A. (1996). Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming. Artificial Intelligence in Design '96. Springer, Dordrecht. pp. 151–170. doi:10.1007/978-94-009-0279-4_9.). Furthermore, the limitation of “using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model” is exactly the function of a machine learning model: feedback or updated data is needed to retrain the model because the model develops and learns as it iterates. Claims 8, 16, and 22 do not include additional elements, or a combination of elements, that result in the claims amounting to significantly more than the judicial exception.

As discussed above with respect to integration of the abstract idea into a practical application, the additional elements listed amount to no more than mere instructions to apply an exception using a generic computer component. In addition, the Applicant’s specification describes “any type of computing device”, ¶ [0020], for implementing the computer-readable medium, system, and/or machine learning model, which does not amount to significantly more than the abstract idea itself and is not enough to transform an abstract idea into eligible subject matter. Furthermore, there is no improvement in the functioning of the computer or technological field, and there is no transformation of subject matter into a different state. Under Step 2B of the test for patent subject matter eligibility, these claims are not patent eligible.

Dependent claims 11-15, 17-19, 21, and 23-26 further recite the computer-readable media, system, and method of claims 8, 16, and 22, respectively. Dependent claims 11-15, 17-19, and 23-26, when analyzed as a whole, are held to be patent ineligible under 35 U.S.C. 101 because the additional recited limitations fail to establish that the claims are not directed to an abstract idea. Under Step 2A, Prong I, these additional claims only further narrow the abstract idea set forth in claims 8, 16, and 22. For example, claims 11-15, 17-19, and 23-26 describe limitations for refining sensitive topics or weights based on a determination that content includes sensitive language and audience segment movement deviation - which only further narrows the scope of the abstract idea recited in the independent claims. Under Step 2A, Prong II, for dependent claims 11-15, 17-19, and 23-26, there are no additional elements introduced; thus, they do not present integration into a practical application, nor do they amount to significantly more.
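The Step 2B analysis above treats training a model on labeled data, using it to score new content, and retraining it on feedback as a well-understood, routine, and conventional computer function. That generic train / predict / retrain loop can be sketched as follows; the toy keyword "model" and its data are invented for illustration and stand in for the neural classifiers discussed in the record.

```python
# Generic train -> predict -> retrain-with-feedback loop: the pattern the
# Office Action characterizes as well-understood, routine, and conventional.
# The keyword-set "model" below is an invented stand-in, not any party's
# actual implementation.

class KeywordSensitivityModel:
    def __init__(self):
        self.topics = set()  # learned set of sensitive topics

    def train(self, examples):
        """Build the model from (topic, is_sensitive) training data."""
        self.topics = {t for t, sensitive in examples if sensitive}

    def predict(self, text):
        """Return True when any learned sensitive topic appears in the text."""
        lowered = text.lower()
        return any(t in lowered for t in self.topics)

    def retrain(self, feedback):
        """Refine the topic set from (topic, is_sensitive) feedback."""
        for topic, sensitive in feedback:
            if sensitive:
                self.topics.add(topic)
            else:
                self.topics.discard(topic)

model = KeywordSensitivityModel()
model.train([("violence", True), ("gardening", False)])
before = model.predict("a report on violence")  # flagged by initial model
model.retrain([("violence", False)])            # feedback: false positive
after = model.predict("a report on violence")   # no longer flagged
```

The retrain step is the feedback loop the Examiner points to: user feedback corrects false positives, and subsequent predictions change accordingly.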
Under Step 2B, the dependent claims do not include any additional elements that are sufficient to amount to significantly more than the judicial exception. Additionally, there is no improvement in the functioning of the computer or technological field, and there is no transformation of subject matter into a different state. As discussed above with respect to integration of the abstract idea into a practical application, the dependent claims do not provide any additional elements that would amount to significantly more than the judicial exception. Under Step 2B, these claims are not patent eligible.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 8, 11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent 11,358,063 to Ashoori in view of U.S. Patent 11,417,085 to Saraee and in further view of U.S. Publication 2018/0232528 to Williamson.

With respect to Claim 8:

Ashoori teaches: One or more computer-readable media having a plurality of executable instructions embodied thereon, which, when executed by one or more processors, cause the one or more processors to perform a method comprising (Ashoori: Col. 8 Lines 48-55): obtaining content for which a sensitivity determination [[is desired]] (i.e. receiving content for which sensitivity or inappropriateness is determined) (Ashoori: Col. 5 Lines 6-12 “FIG. 2 is a flow diagram illustrating a method in an embodiment. At 202, the method includes receiving multimedia content to be played on a multimedia player device. A profile associated with a target audience can also be received. At 204, the method includes determining that the multimedia content contains audience-inappropriate content.”); determining, using a machine learning model, that the content, or a portion thereof, includes subject matter that is potentially sensitive (i.e. determining if the content is potentially inappropriate or sensitive via machine learning) (Ashoori: Col. 5 Lines 11-16 “At 204, the method includes determining that the multimedia content contains audience-inappropriate content. For example, the multimedia content can be passed to a machine learning classifier such as an artificial neural network trained to classify content propriety. The classifier may output a score associated with appropriateness of the content.”); and based on the determination that the content, or the portion thereof, includes subject matter that is potentially sensitive, causing display, for a user that provided the content, of the content with a corresponding indication that the content is potentially sensitive [[including an indication of a particular sensitive topic identified within the content]] (i.e. based on the determination that the content is potentially sensitive, outputting a score or indicating that the content is potentially sensitive or flagging/tagging the content as potentially sensitive reads on causing a display for the user that the content and particular topic is identified as potentially sensitive and wherein outputting the score via output interface includes display) (Ashoori: Col. 3 Lines 17-43 “At 106, for example, one or more classifiers are applied to the script and/or image of the segment to detect and flag a segment with potential sensitive content. For instance, an image classifier or classification algorithm can be used to classify inappropriate image content on the scene. Similarly, a text classifier or classification algorithm can be used to classify inappropriate language or audio in the segment. For example, a classifier may output a score associated with the content, which score can indicate a degree of appropriateness or inappropriateness. As another example, the classifier may output a score indicating how close the content is for appropriateness corresponding to the requested profile. A segment determined to have inappropriate content can be flagged. For example, the segment can be tagged as having an inappropriate script ( e.g., text, caption, audio), and/or inappropriate image and/or video. For instance, a segment content which has a score that exceeds a general threshold score can be flagged as being inappropriate.” Furthermore, as cited in Col. 7 Lines 10-16 “One or more hardware processors 302 may be coupled with interface devices such as a network interface 308 for communicating with remote systems, for example, via a network, and an input/output interface 310 for communicating with input and/or output devices such as a keyboard, mouse, display, and/or others.”).

Ashoori does not explicitly disclose obtaining content for which a sensitivity determination is desired. However, Saraee further discloses obtaining content for which a sensitivity determination is desired (i.e. author’s potential request includes determining what about the content will alienate or cause fatigue to readers) (Saraee: Col. 44 Lines 1-13 “For example, if the selected potential request was "How many times should I post content today?", the system may provide a numerical recommended aspect response instruction to the user to post content as the unique author, for example, four (4) times today. The system's determination of the recommended aspect can be based on activity data that indicates aspects of other content authored by or interacted with by other authors. For example, the system may use activity data to determine that four is an optimal number of times to post content in a day in order to receive the most interactions with the content without alienating authors or causing fatigue with posts by the unique author.” Furthermore, as cited in Col. 84 Lines 57-63 “For example, if the audience has been shown an ad of a red apple four times per day, the system 1000 may determine that the audience has been fatigued by red apples due to the high frequency with which red apples appeared in other content items displayed to the audience. As a result, the system 1000 may determine that the next transformation of this content item should de-emphasize or eliminate the prominent red attribute.”).

Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to add Saraee’s obtaining content for which a sensitivity determination is desired to Ashoori’s based on the determination that the content, or the portion thereof, includes subject matter that is potentially sensitive, causing display, for a user that provided the content, of the content with a corresponding indication that the content is potentially sensitive including an indication of a particular sensitive topic identified within the content. One of ordinary skill in the art would have been motivated to do so because “The purpose of such a transformation would be to "refresh" the content item for the users with the intent of increasing the performance of the content in the audience.” (Saraee: Col. 85 Lines 1-3).
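The Ashoori passages quoted above describe a score-then-flag pattern: a classifier outputs a score for each segment, and segments whose score exceeds a threshold are flagged as inappropriate. A minimal sketch of that pattern follows; the word-list scorer is an invented stand-in for the neural classifier the reference describes, and the word list and threshold are assumptions for illustration only.

```python
# Sketch of the score-then-flag pattern described in the Ashoori quotes.
# The word list, scoring rule, and threshold are invented stand-ins for the
# reference's trained classifier.

SENSITIVE_WORDS = {"violence", "injury", "layoffs"}

def score_segment(text):
    """Toy classifier: fraction of words appearing on the sensitive list."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    hits = sum(w in SENSITIVE_WORDS for w in words)
    return hits / max(len(words), 1)

def flag_segments(segments, threshold=0.2):
    """Tag each segment, flagging those whose score exceeds the threshold."""
    return [{"text": s,
             "score": score_segment(s),
             "flagged": score_segment(s) > threshold}
            for s in segments]

report = flag_segments(["violence and injury everywhere",
                        "a calm garden scene"])
```

As in the quoted disclosure, the score conveys a degree of inappropriateness, and the threshold comparison is what turns a continuous score into a flag.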
Ashoori and Saraee do not explicitly disclose based on the determination that the content, or the portion thereof, includes subject matter that is potentially sensitive, causing display, for a user that provided the content, of the content with a corresponding indication that the content is potentially sensitive including an indication of a particular sensitive topic identified within the content, and using the content indicated as potentially sensitive and the indication of the particular sensitive topic to refine the set of sensitive topics, or weights associated therewith, to further train the machine learning model. However, Williamson further discloses: based on the determination that the content, or the portion thereof, includes subject matter that is potentially sensitive, causing display, for a user that provided the content, of the content with a corresponding indication that the content is potentially sensitive including an indication of a particular sensitive topic identified within the content (i.e. displaying report of content that is potentially sensitive and labels or sensitive type classification) (Williamson: ¶¶ [0103] [0104] “The data classification reporting module 114 displays a coordinate identifier 802, a sensitive data type classification 804, a confidence value 806, and various metrics 808 in each row of the sensitive data list. The coordinate identifier 802 may uniquely identify the subsection of the input data sources at which sensitive data was detected by the data classifier 108 as well as the type of sensitive data that was detected there. The subsection may be indicated by the name of the location at which it is stored as well as the label for the subsection. For example, it may be the name of a database table…The sensitive data type classification 804 entry indicates the type of sensitive data that was detected. Note that a subsection (e.g., a table) of the input data sources may store multiple sensitive data types. 
The confidence value 806 indicates the confidence of the system in determining whether the data is sensitive data of the indicated sensitive data type. Finally, the metrics 808 indicate an observed and estimated count of the data portions in the subsection that are sensitive data and of the sensitive data type indicated in the sensitive data type classification 804 column.”), and using the content indicated as potentially sensitive and the indication of the particular sensitive topic to refine the set of sensitive topics, or weights associated therewith, to further train the machine learning model (i.e. using the content terms and meta-labels identified as potentially sensitive to be fed back into the machine learning model in order to refine the model weights according to accuracy with respect to if the content is potentially sensitive or not) (Williamson: ¶¶ [0035] [0036] “The data classifier 108 may further determine that data is sensitive using reference table matching. The data classifier 108 may store various reference tables that include lists of potentially sensitive data, such as common names of persons, common terms in addresses (e.g., country codes, common street names), product names, medical conditions, and so on. The data classifier 108 may match data portions in the data received from the data pre-processor 106 with the elements in the reference tables to see if it can find a match. If a match is found, the data classifier 108 may determine that the data portion is sensitive…The data classifier 108 may also determine that data is sensitive using machine learning algorithms. The data classifier 108 trains a machine learning model, such as a multilayer perceptron or convolutional neural network, on data known to be sensitive data. Features may first be extracted from the data using an N-gram (e.g., a bigram) model and these features input into the machine learning model for training. 
After training, the machine learning model will be able to determine (with a confidence level) whether data is sensitive or not. The trained machine learning model may be verified using a verification dataset composed of real world customer data, and the error rate analyzed. The machine learning model may be further improved during live operation by user feedback.” Furthermore, as cited in ¶ [0041] “The classifier refinement engine 110 may continue to train the machine learning module of the data classifier 108 using live data. This may be achieved by utilizing user feedback to determine whether some data portions classified as sensitive are in fact not sensitive and are false positives. The classifier refinement engine 110 may improve the accuracy of the various other methods in the data classifier 108 of determining whether a data portion is sensitive using other forms of user feedback. New patterns may be added to the pattern matching method based on user feedback indicating certain data patterns are sensitive. Logical rules may be modified by the classifier refinement engine 110 based on configuration information provided by a user or by an indication from a user that data portions in certain scenarios are sensitive. Reference tables may be updated using newly received reference data. Contextual matching may also be updated based on new indications of contextual data.” Furthermore, as cited in ¶ [0050] “The metadata analyzer 202 analyzes the metadata of a data portion in the data received from the data of the input data sources 102A-N to determine whether the data portion is sensitive data. The metadata may include, in the case of data pre-processed by the data pre-processor 106, the metadata labels in the common data structure. In the case where a data pre-processor 106 is not used, the metadata includes the metadata labels directly extracted from the input data sources 102A-N. 
This includes column labels, schema names, database names, tables names, XML tags, filenames, file headers, other tags, file metadata, and so on.”).

Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to add Williamson’s using the content indicated as potentially sensitive and the indication of the particular sensitive topic to refine the set of sensitive topics, or weights associated therewith, to further train the machine learning model to Ashoori’s based on the determination that the content, or the portion thereof, includes subject matter that is potentially sensitive, causing display, for a user that provided the content, of the content with a corresponding indication that the content is potentially sensitive including an indication of a particular sensitive topic identified within the content. One of ordinary skill in the art would have been motivated to do so in order “to determine a best fit of the determinations made by each component and the correct determination of whether the data is sensitive. By performing this fitting, the significance factor tuner 304 may be able to determine the percentage impact that each component, for each sensitive data type (or other category), has in predicting an accurate determination of whether the data is sensitive data.” (Williamson: ¶ [0078]).

With respect to Claim 11:

Ashoori teaches: The media of claim 8, wherein the method further comprises: […] based on the determination of the audience segment [[movement]] and the determination that the content, or the portion thereof, includes subject matter that is potentially sensitive, providing at least one sensitivity notification for use in refining the set of sensitive topics, or the weights associated therewith, used to further train the machine learning model (i.e. providing a score and flagging content that is potentially sensitive or inappropriate based on determination) (Ashoori: Col.
3 Lines 17-34 “At 106, for example, one or more classifiers are applied to the script and/or image of the segment to detect and flag a segment with potential sensitive content. For instance, an image classifier or classification algorithm can be used to classify inappropriate image content on the scene. Similarly, a text classifier or classification algorithm can be used to classify inappropriate language or audio in the segment. For example, a classifier may output a score associated with the content, which score can indicate a degree of appropriateness or inappropriateness. As another example, the classifier may output a score indicating how close the content is for appropriateness corresponding to the requested profile. A segment determined to have inappropriate content can be flagged. For example, the segment can be tagged as having an inappropriate script ( e.g., text, caption, audio), and/or inappropriate image and/or video. For instance, a segment content which has a score that exceeds a general threshold score can be flagged as being inappropriate.” Furthermore, as cited in Col. 4 Lines 16-24 “In an embodiment, for example, once a sensitive segment is up to play, before playing that scene, an alternate appropriate script can be generated using generative models. The generated script or text can be passed through the classifier to ensure it passes the sensitive score of the given audience profile. For example, there can be a feedback loop, where the generated script or text can be input to a classifier at 106 and the sensitivity analysis at 108 performed again using the generated script.” Furthermore, as cited in Col. 3 Lines 35-44 “At 108, a sensitivity analysis is conducted on the flagged segment to identify if the segment is inappropriate given the audience profile. For example, a sensitivity analysis is conducted on the classification results. 
Segments of the media content can be tagged as sensitive or non-sensitive based on the audience profile and the results of classification at 106. For example, the audience profile may indicate, for that particular audience, what type of content is considered inappropriate. The classification at 106 can classify the type of the inappropriate content, for example, by score.”). Ashoori does not explicitly disclose the media of claim 8, wherein the method further comprises: determining audience segment movement in association with publication of the content. However, Saraee further discloses determining audience segment movement in association with publication of the content (i.e. determining if users leave or enter audience/community) (Saraee: Col. 14 Lines 16-38 “For example, an author that previously had not posted about root beer or previously been considered part of a community that enjoys root beer may author an online social media posting regarding their experience trying root beer for the first time and enjoying it. The system may determine a fluctuation in the custom author crowd based on the online social media posting. That is, the community of those who enjoy root beer within the custom author crowd has fluctuated upward. In other embodiments, the system may determine a downward fluctuation. For example, an author may leave an affinity group for root beer hosted by a social media website, which may indicate a downward fluctuation and that the author has left the community of those who enjoy root beer. In another example, a system may determine that an author's failure to author content about root beer over a certain time period is a downward fluctuation and that the author has left the community of those that enjoy root beer. In an illustrative embodiment, the system is monitoring a plurality of authors in a custom author crowd for overall fluctuations based on a fluctuation criteria. 
That is, the system can determine how many authors in the custom author crowd have joined and/or left a community defined by the fluctuation criteria.”); and Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to add Saraee’s determining audience segment movement in association with publication of the content to Ashoori’s providing an indication that the content is potentially sensitive for display to a user that provided the content. One of ordinary skill in the art would have been motivated to do so because “The purpose of such a transformation would be to "refresh" the content item for the users with the intent of increasing the performance of the content in the audience.” (Saraee: Col. 85 Lines 1-3). With respect to Claim 13: Ashoori teaches: The media of claim 8, wherein the machine learning model outputs a probability associated with the potential sensitivity (i.e. learning model outputs a score associated with a probability of potential sensitivity or appropriateness) (Ashoori: Col. 3 Lines 17-34 “At 106, for example, one or more classifiers are applied to the script and/or image of the segment to detect and flag a segment with potential sensitive content. For instance, an image classifier or classification algorithm can be used to classify inappropriate image content on the scene. Similarly, a text classifier or classification algorithm can be used to classify inappropriate language or audio in the segment. For example, a classifier may output a score associated with the content, which score can indicate a degree of appropriateness or inappropriateness. As another example, the classifier may output a score indicating how close the content is for appropriateness corresponding to the requested profile. A segment determined to have inappropriate content can be flagged. 
For example, the segment can be tagged as having an inappropriate script ( e.g., text, caption, audio), and/or inappropriate image and/or video. For instance, a segment content which has a score that exceeds a general threshold score can be flagged as being inappropriate.”). With respect to Claim 14: Ashoori teaches: The media of claim 8, wherein the indication that the content is potentially sensitive includes an indication of a particular sensitive topic identified within the content (i.e. identifying audio, visual or text segments within topic that may be potential sensitive or inappropriate) (Ashoori: Col. 3 Lines 57-64 “At 110, if it is determined that the script of a segment is not appropriate for the given profile but the audio visual scene does not include sensitive content, the content of the media is altered for a replacement script for that segment. If the script of the segment is flagged as inappropriate, a new audience-appropriate script is generated. For example, a generative adversarial network (GAN) can be implemented to generate new audience-appropriate script.” Furthermore, as cited in Col. 4 Lines 49-57 “If it is determined that the script or text does not include inappropriate content for the target audience, but contains visual content ( e.g., imagery or video) which is classified as being sensitive for that target audience, the system can generate a new imagery for that scene. For example, if the actor is not dressed appropriately, the imagery of the scene can be updated using a deep learning technology, generative models, and/or text to image translation to fix the dressing of the actor.”). With respect to Claim 15: Ashoori does not explicitly disclose the media of claim 8, wherein the content comprises advertising or marketing material. However, Saraee further discloses wherein the content comprises advertising or marketing material (Saraee: Col. 
37 Lines 58-66 “In an operation 515, an engagement and/or content item campaign is executed by or in conjunction with the system disclosed herein. This may be running a content item, posting sponsored content online, sending out print media, running a commercial, tweeting something from an official account, prioritizing particular content on a social networking web site, retweeting a post, or any other sort of engagement or content item campaign that can be executed online or offline”). Therefore, it would have been obvious to one of ordinary skill in the art, at the time the invention was made, to add Saraee’s content comprises advertising or marketing material to Ashoori’s providing an indication that the content is potentially sensitive for display to a user that provided the content. One of ordinary skill in the art would have been motivated to do so because “The purpose of such a transformation would be to "refresh" the content item for the users with the intent of increasing the performance of the content in the audience.” (Saraee: Col. 85 Lines 1-3). Allowable Subject Matter Claims 12, 16-19, and 21-26 are allowable over the prior art. 
With respect to Independent Claim 16: A computing system comprising: a processor; and a non-transitory computer-readable medium having stored thereon instructions that when executed by the processor, cause the processor to perform operations including: determining, via a machine learned model, that a content includes sensitive language, the machine learned model trained using a set of sensitive topics, the set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics; in accordance with publication of the content, identifying, via a model, audience segment movement that indicates movement of one or more audience members from one audience segment to another audience segment; determining the audience segment movement deviates from an expected audience segment movement; and based on the determination that the content includes sensitive language and the determination that the audience segment movement deviates from the expected audience segment movement, using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model. 
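The allowed independent claims recite an embedding-based expansion of an initial topic set, a weighted sensitivity determination, and refinement of topic weights when audience-segment movement deviates from expectation. The mechanics can be sketched as follows; every name, embedding vector, and threshold below is a hypothetical illustration, not Adobe's actual implementation.

```python
# Illustrative sketch of the claimed pipeline: expand sensitive topics via
# embedding similarity, flag content against weighted topics, and refine
# weights on deployment feedback. All data and thresholds are invented.

from math import sqrt

# Toy 2-D embedding table standing in for a trained language model's output.
EMBEDDINGS = {
    "violence": (0.9, 0.1), "gore": (0.85, 0.2),
    "gambling": (0.1, 0.9), "betting": (0.15, 0.85),
    "cooking": (0.5, 0.5),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def expand_topics(initial, vocab=EMBEDDINGS, min_sim=0.95):
    """Expand an initial set of sensitive topics with embedding neighbors."""
    expanded = set(initial)
    for seed in initial:
        for term, vec in vocab.items():
            if term not in expanded and cosine(vocab[seed], vec) >= min_sim:
                expanded.add(term)
    return expanded

def flag_sensitive(text, topics, weights):
    """Score content against weighted topics; flag at or above threshold."""
    score = sum(weights.get(t, 1.0) for t in topics if t in text.lower())
    return score, score >= 1.0

def refine_weights(weights, topic, movement_deviates, step=0.25):
    """Feedback step: raise a topic's weight when flagged content coincides
    with audience-segment movement that deviates from expectation."""
    if movement_deviates:
        weights[topic] = weights.get(topic, 1.0) + step
    return weights
```

With the toy table above, seeding `{"violence", "gambling"}` pulls in "gore" and "betting" but not "cooking", and a deviation signal nudges the corresponding topic weight upward for the next training pass.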
With respect to Independent Claim 22: A computer-implemented method comprising: determining, via a machine learned model, that a content includes sensitive language, the machine learned model trained using a set of sensitive topics, the set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics; in accordance with publication of the content, identifying, via a model, audience segment movement that indicates movement of one or more audience members from one audience segment to another audience segment; determining the audience segment movement deviates from an expected audience segment movement; and based on the determination that the content includes sensitive language and the determination that the audience segment movement deviates from the expected audience segment movement, using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model. The prior art of Ashoori, Saraee, and Williamson do not disclose – “based on the determination that the content includes sensitive language and the determination that the audience segment movement deviates from the expected audience segment movement, using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model”. The Ashoori and Williamson references do not disclose an audience segment movement or deviation from an expected audience segment movement. 
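Saraee's fluctuation monitoring (counting authors who join or leave a community) and the claimed deviation determination can be sketched together in a few lines; the snapshots and tolerance below are invented for illustration.

```python
# Illustrative sketch (invented data): count audience-segment movement between
# two snapshots, as in Saraee's fluctuation monitoring, then test whether the
# observed movement deviates from an expected movement, as claims 16/22 recite.

def segment_movement(before, after):
    """Count members who left or joined a segment between two snapshots."""
    left = len(before - after)
    joined = len(after - before)
    return left + joined

def deviates(observed, expected, tolerance=2):
    """Flag movement that differs from expectation by more than a tolerance."""
    return abs(observed - expected) > tolerance

before = {"a1", "a2", "a3", "a4"}
after = {"a3", "a4", "a5"}                    # a1, a2 left; a5 joined
movement = segment_movement(before, after)    # 3 total moves
```

Here a movement of 3 against an expected movement of 0 would deviate, while an expected movement of 3 would not.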
The Saraee reference, though it discloses an audience segment movement or deviation from an expected audience segment movement, does not disclose based on the determination that the content includes sensitive language AND the determination that the audience segment movement deviates from the expected audience segment movement, using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model. This limitation is found to be novel over the prior art. Therefore, no art rejection was applied to independent claim 16 with dependent claims 17-19, 21 and independent claim 22 with dependent claims 23-26. The same reasons for novelty also apply to dependent claim 12. Response to Arguments Applicant’s arguments, see pages 9-13 of the Remarks filed on 02/17/2026, with respect to the 35 U.S.C. § 101 rejection(s) of claim(s) 8 and 11-19, have been considered but are not persuasive: The Applicant asserts “For example, among other things, the claimed expansion of an initial set of sensitive topics using a language model trained to learn embeddings specific to sensitive topics is not a mental process. Learning embeddings specific to sensitive topics requires high-dimensional vector representations generated by a trained language model and cannot practically be performed in the human mind. This embedding-based expansion produces a more robust and computationally structured training dataset, improving the technical operation of the machine learned classifier. The claim is therefore directed to a specific improvement in how a machine learning model is constructed and trained, not to a generalized concept. 
Further, the claim recites identifying, via a model, audience segment movement upon publication of content and determining that such movement deviates from an expected audience segment movement. This requires maintaining a computational model of expected audience segment transitions and performing deviation analysis based on observed behavioral data generated within a networked content delivery environment. The determination of deviation from an expected movement is a structured comparison operation that relies on system-generated behavioral metrics, not a mere observation or mental evaluation. Importantly, the claim recites a feedback mechanism in which the sensitive-language determination and/or the deviation determination is used to refine the set of sensitive topics or weights associated therewith for subsequent model training. This refinement alters internal model parameters and changes the technical behavior of the machine learned model going forward. As such, the claim improves the functioning of the machine learning system itself by dynamically adapting its training data and weighting structure based on real-world deployment signals. This is a specific technological solution to the technical problem of static and inefficient content filtering systems that fail to adapt to changing sensitivity conditions.” The Examiner respectfully disagrees. 
Claims 8, 16, and 22 recite additional limitations including “via a machine learned model; the machine learned model trained using a set of sensitive topics, the set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics; via a model; and using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model.” These limitations are seen as adding the words “apply it” (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea - see MPEP 2106.05(f). Accordingly, alone and in combination, these additional elements are seen as using a computer as a tool to perform an abstract idea, adding insignificant extra-solution activity to the judicial exception. They do no more than link the judicial exception to a particular technological environment or field of use, i.e. a machine learning model and how it is trained, and therefore do not integrate the abstract idea into a practical application. The recitation of the machine learned model or model is described in an “apply it” manner (i.e. via) and the recitation of how the machine learned model is trained is described at a high level, which is why these additional limitations are analyzed in Step 2A, Prong II. Furthermore, Examiner would like to note that merely training a machine learning model with data (i.e. set of sensitive topics including an initial set of sensitive topics expanded using a language model trained to learn embeddings specific to sensitive topics), using the machine learning model to determine data (i.e. 
if the content has potentially sensitive language), and retraining the machine learning model based on data (i.e. feedback) in order to improve accuracy is seen as adding insignificant extra-solution activity to the judicial exception - see MPEP 2106.05(g) because this is a computer function that is well-understood, routine, and conventional. For example, it has been well-known since at least 1996 that “Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.” (See Wikipedia: Machine learning: The definition "without being explicitly programmed" is often attributed to Arthur Samuel, who coined the term "machine learning" in 1959, but the phrase is not found verbatim in this publication, and may be a paraphrase that appeared later. Confer "Paraphrasing Arthur Samuel (1959), the question is: How can computers learn to solve problems without being explicitly programmed?" in Koza, John R.; Bennett, Forrest H.; Andre, David; Keane, Martin A. (1996). Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming. Artificial Intelligence in Design '96. Springer, Dordrecht. pp. 151–170. doi:10.1007/978-94-009-0279-4_9.”). Furthermore, the limitation of “using an indication that the content includes sensitive language and/or an indication that the audience segment movement deviates from the expected audience segment movement as feedback to refine a set of sensitive topics, or weights associated therewith, to use to train the machine learning model” is exactly the function of a machine learning model. Feedback or updated data is needed to retrain the learning model because the learning model develops and learns as it goes through the iterations. 
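The train, predict, and retrain-on-feedback cycle the examiner characterizes as conventional can be shown generically in a few lines. This is a toy perceptron-style sketch with invented features and labels, standing in for any supervised learner updated from deployment feedback.

```python
# Generic illustration of the train / predict / retrain-with-feedback cycle
# the examiner treats as conventional machine learning. Features, labels, and
# the perceptron-style update rule are invented for illustration only.

def predict(weights, features):
    """Flag content as sensitive when its feature weights sum above zero."""
    return sum(weights.get(f, 0.0) for f in features) > 0

def train(weights, examples, lr=1.0, epochs=10):
    """Perceptron-style updates: nudge feature weights toward the labels."""
    for _ in range(epochs):
        for features, label in examples:
            if predict(weights, features) != label:
                for f in features:
                    weights[f] = weights.get(f, 0.0) + (lr if label else -lr)
    return weights

# Initial training data: one sensitive term, one benign term.
weights = train({}, [(["fight"], True), (["cake"], False)])

# Deployment feedback: a flagged item was a false positive; retrain on it.
weights = train(weights, [(["fight", "cartoon"], False)])
```

After the feedback pass, the model no longer flags the "fight"/"cartoon" combination, which is exactly the iterative refinement behavior described above.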
The Applicant also asserts “Even assuming that the Office characterizes certain aspects of the claim as involving an abstract idea, the claim integrates any such concept into a practical application under Step 2A, Prong Two. The claimed operations are tied to publication of content in a computing environment, detection of audience-segment transitions, deviation analysis, and structured parameter refinement of a machine learned model. These operations improve the accuracy and adaptability of the model and reduce unnecessary content deployment, thereby improving the efficiency of the underlying computing system. The claim therefore applies any alleged abstract concept in a manner that meaningfully improves a technological process. For example, the Specification describes "to address sensitive content, viewers of the content generally must manually override the sensitive material or rely on the content publisher to appropriately curate topics and content. Such a process is time consuming and oftentimes inaccurate. Further, because of the inaccuracies, unnecessary computing resource utilization is used." Para. [0011]. In accordance with embodiments in the present technology, advantageously, "identifying sensitive content and adapting content accordingly can reduce computing resources as more effective content can be displayed to viewers, thereby reducing computing resources needed to present sensitive content." Para. [0012]. Further, "[a]dvantageously, expanding the initial set of sensitive topics provides a more robust and comprehensive set of sensitive topics for use in training the machine learning model that predicts sensitive topics. Advantageously, updating the sensitive topics, or weights associated therewith, enables a more dynamic approach to identifying sensitive content as content can have different levels of sensitivities and, further, can change over time." Para. [0013]. As such, Applicant requests withdrawal of the 35 U.S.C. 
101 rejection.” The Examiner respectfully disagrees. Examiner would like to note that the recitation of the machine learned model or model is described in an “apply it” manner (i.e. via) and the recitation of how the machine learned model is trained is described at a high level, which is why these additional limitations are analyzed in Step 2A, Prong II. Furthermore, the computing resources being reduced is merely an ancillary effect of the claims because the claims do not recite any computing resources. Also, the content being more accurate or relevant to the user is not an improvement to another technical field or technology and is further directed to the abstract idea of certain methods of organizing human activity. Therefore, the rejection(s) of claim(s) 8, 11-19, and 21-26 under 35 U.S.C. § 101 is maintained above with an updated analysis. Applicant’s arguments, see pages 13-15 of the Remarks filed on 02/17/2026, with respect to the 35 U.S.C. § 103 rejection(s) of claim(s) 8, 11, and 13-15 over Ashoori in view of Saraee and in further view of Williamson, have been considered but are not persuasive. The Applicant asserts “Further, the Williamson reference refers to continuing to train the machine learning module using live data, which may be achieved using user feedback to determine whether some data portions classified as sensitive are in fact not sensitive and are false positives. Williamson, Para. [0041]. Using user feedback, however, is very different from using the content indicated as potentially sensitive and the indication of the particular sensitive topic to refine the set of sensitive topics, or weights associated therewith, to further train the machine learning model, as in claim 8.” The Examiner respectfully disagrees. The Examiner would like to refer the Applicant to ¶¶ [0035] [0036] of the Williamson reference; “The data classifier 108 may further determine that data is sensitive using reference table matching. 
The data classifier 108 may store various reference tables that include lists of potentially sensitive data, such as common names of persons, common terms in addresses ( e.g., country codes, common street names), product names, medical conditions, and so on. The data classifier 108 may match data portions in the data received from the data pre-processor 106 with the elements in the reference tables to see if it can find a match. If a match is found, the data classifier 108 may determine that the data portion is sensitive…The data classifier 108 may also determine that data is sensitive using machine learning algorithms. The data classifier 108 trains a machine learning model, such as a multilayer perceptron or convolutional neural network, on data known to be sensitive data. Features may first be extracted from the data using an N-gram (e.g., a bigram) model and these features input into the machine learning model for training. After training, the machine learning model will be able to determine (with a confidence level) whether data is sensitive or not. The trained machine learning model may be verified using a verification dataset composed of real world customer data, and the error rate analyzed. The machine learning model may be further improved during live operation by user feedback.” Furthermore, as cited in ¶ [0041] “The classifier refinement engine 110 may continue to train the machine learning module of the data classifier 108 using live data. This may be achieved by utilizing user feedback to determine whether some data portions classified as sensitive are in fact not sensitive and are false positives. The classifier refinement engine 110 may improve the accuracy of the various other methods in the data classifier 108 of determining whether a data portion is sensitive using other forms of user feedback. New patterns may be added to the pattern matching method based on user feedback indicating certain data patterns are sensitive. 
Logical rules may be modified by the classifier refinement engine 110 based on configuration information provided by a user or by an indication from a user that data portions in certain scenarios are sensitive. Reference tables may be updated using newly received reference data. Contextual matching may also be updated based on new indications of contextual data.” Furthermore, as cited in ¶ [0050] “The metadata analyzer 202 analyzes the metadata of a data portion in the data received from the data of the input data sources 102A-N to determine whether the data portion is sensitive data. The metadata may include, in the case of data pre-processed by the data pre-processor 106, the metadata labels in the common data structure. In the case where a data pre-processor 106 is not used, the metadata includes the metadata labels directly extracted from the input data sources 102A-N. This includes column labels, schema names, database names, tables names, XML tags, filenames, file headers, other tags, file metadata, and so on.” It is clear from the disclosure above that the Williamson reference teaches using the content terms or potentially sensitive content, and the meta-labels or sensitive topics identified as potentially sensitive, being fed back into the machine learning model in order to refine the model weights according to whether the content is potentially sensitive or not. Therefore, the rejection(s) of claim(s) 8, 11, and 13-15 under 35 U.S.C. § 103 is provided above with updated citations. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The following references are cited to further show the state of the art: U.S. Publication 2011/0219300 to Jindal for disclosing A system and method for evaluating documents for approval or rejection and/or rating. 
The method comprises comparing the document to one or more criteria determining whether the document contains an element that is substantially identical to one or more of a visual element, an audio element or a textual element that is determined to be displeasing. U.S. Publication 2023/0237180 to Scott for disclosing Systems and methods for linking a screen capture to a user support session are disclosed. The system may receive a screen capture initiation request from a user device. The system may capture a first data object indicative of a first graphical user interface associated with the user device. The system may provide, to the user device, the first graphical user interface for a predetermined period of time. The system may track, by the one or more processors, one or more inputs from the user device that indicate the presence of one or more articles of sensitive information within the graphical user interface. The system may mask the one or more articles of sensitive information within the graphical user interface, generate a second data object indicative of the graphical user interface having the masked articles of sensitive information, and store the second data object in a data repository. THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. 
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Azam Ansari, whose telephone number is (571) 272-7047. The examiner can normally be reached from Monday to Friday between 8 AM and 4:30 PM. If any attempt to reach the examiner by telephone is unsuccessful, the examiner's supervisor, Waseem Ashraf, can be reached at (571) 270-3948. Another resource that is available to applicants is the Patent Application Information Retrieval (PAIR) system. Information regarding the status of an application can be obtained from the PAIR system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pairdirect.uspto.gov. Should you have questions on access to the Private PAIR system, please feel free to contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). Applicants are invited to contact the Office to schedule either an in-person or a telephonic interview to discuss and resolve the issues set forth in this Office Action. Although an interview is not required, the Office believes that an interview can be of use to resolve any issues related to a patent application in an efficient and prompt manner. /AZAM A ANSARI/ Primary Examiner, Art Unit 3621 March 19, 2026
Prosecution Timeline

Feb 14, 2022
Application Filed
Mar 22, 2025
Non-Final Rejection — §101, §103
May 29, 2025
Interview Requested
Jun 18, 2025
Examiner Interview Summary
Jun 18, 2025
Response Filed
Jun 18, 2025
Applicant Interview (Telephonic)
Jul 29, 2025
Final Rejection — §101, §103
Oct 31, 2025
Request for Continued Examination
Nov 08, 2025
Response after Non-Final Action
Nov 13, 2025
Examiner Interview (Telephonic)
Nov 13, 2025
Non-Final Rejection — §101, §103
Jan 23, 2026
Interview Requested
Feb 10, 2026
Applicant Interview (Telephonic)
Feb 10, 2026
Examiner Interview Summary
Feb 17, 2026
Response Filed
Mar 19, 2026
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591892
SYSTEM, METHOD, AND COMPUTER PROGRAM PRODUCT FOR EARLY DETECTION OF A MERCHANT DATA BREACH THROUGH MACHINE-LEARNING ANALYSIS
2y 5m to grant Granted Mar 31, 2026
Patent 12499471
AUTOMATICALLY GENERATING A RETAILER-SPECIFIC BRAND PAGE BASED ON A MACHINE LEARNING PREDICTION OF ITEM AVAILABILITY
2y 5m to grant Granted Dec 16, 2025
Patent 12469042
SYSTEM FOR GENERATING A NON-FUNGIBLE TOKEN INCLUDING MUTABLE AND IMMUTABLE ATTRIBUTES AND RELATED METHODS
2y 5m to grant Granted Nov 11, 2025
Patent 12423918
AUGMENTED REALITY IN-APPLICATION ADVERTISEMENTS
2y 5m to grant Granted Sep 23, 2025
Patent 12417468
USER ENGAGEMENT MODELING FOR ENGAGEMENT OPTIMIZATION
2y 5m to grant Granted Sep 16, 2025
Based on this examiner's 5 most recent grants.

Prosecution Projections

5-6
Expected OA Rounds
48%
Grant Probability
98%
With Interview (+49.7%)
3y 8m
Median Time to Grant
High
PTA Risk
Based on 338 resolved cases by this examiner. Grant probability derived from career allow rate.
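The "with interview" figure appears to be the career allow rate plus the interview lift, added in percentage points (48% + 49.7 points ≈ 98%). Assuming that reading, the arithmetic is a one-liner:

```python
# Checking the projection arithmetic shown above, assuming the interview lift
# is additive in percentage points (an inference from the displayed figures).
baseline = 48.0        # career allow rate, %
interview_lift = 49.7  # percentage-point lift when an interview is held
with_interview = baseline + interview_lift   # 97.7, displayed as 98%
assert round(with_interview) == 98
```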
