Prosecution Insights
Last updated: April 19, 2026
Application No. 18/300,730

METHODS AND SYSTEMS FOR PROVIDING CONTENT

Final Rejection: §101, §102, §103, §112
Filed: Apr 14, 2023
Examiner: MERCHANT, SHAHID R
Art Unit: 3684
Tech Center: 3600 (Transportation & Electronic Commerce)
Assignee: Comcast Cable Communications LLC
OA Round: 2 (Final)

Grant Probability: 29% (At Risk)
Projected OA Rounds: 3-4
Projected Time to Grant: 4y 9m
Grant Probability with Interview: 54%

Examiner Intelligence

Career Allow Rate: 29% (39 granted / 136 resolved; -23.3% vs TC avg)
Interview Lift: +25.2% (resolved cases with interview)
Avg Prosecution: 4y 9m (typical timeline)
Currently Pending: 15
Total Applications: 151 (career history, across all art units)

Statute-Specific Performance

§101: 26.8% (-13.2% vs TC avg)
§103: 37.3% (-2.7% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§112: 22.4% (-17.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 136 resolved cases.

Office Action

Rejections under: §101, §102, §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Status of Claims

Claims 1-20 are currently pending and have been considered below. Claims 7 and 17 have been amended.

Response to Arguments

Applicant's arguments filed September 19, 2025 have been fully considered but they are not persuasive.

Applicant argues on page 6 against the rejection under 35 U.S.C. 112(b). Specifically, Applicant argues: "For the purposes of 35 U.S.C. § 112(b), Applicant need not define, in the claims, a particular step of 'determining...secondary content.' Applicant submits the present claims satisfy the requirements of 35 U.S.C. § 112(b). Further, the specification is replete with examples of what a how, and when, any 'secondary content' referenced in the claims may be determined to carry out the claimed methods. As such, Applicant submits the requirements of 35 U.S.C. § 112(b) are satisfied and the rejection should be withdrawn."

Examiner disagrees. Applicant has not provided any reason as to why a particular step need not be defined, and simply states that the present claims satisfy the requirements of 35 U.S.C. § 112(b). Next, Applicant simply states that the "specification is replete with examples of what a how, and when, any 'secondary content' referenced in the claims may be determined to carry out the claimed methods" without providing any paragraph numbers, page numbers, or drawing numbers for support. Simply stating something without support is not persuasive. Accordingly, the 35 U.S.C. 112(b) rejection is maintained.

Next, Applicant argues on pages 6-13 against the rejection under 35 U.S.C. 101. Specifically, Applicant argues on page 7 that the claims do not recite a mental process. Examiner did not cite mental process as the grouping of abstract ideas in the last office action; however, the claims could be grouped under mental process as well.
While Applicant argues the human mind cannot "determine a sentiment score per se" or "cause output," the relevant inquiry looks to whether the steps, as claimed at a high level, can be performed in the mind or with pen and paper (MPEP § 2106.04(a)(2)). For example, a doctor may ask a patient, on a scale from 1 to 10, to state their pain level ("1" being no pain and "10" being extreme pain). The human mind is very capable of determining/processing a score (1 to 10) and writing it down or outputting it. Determining sentiment, assigning a numeric rating, comparing ratings, and choosing which content to show can be performed mentally or via pen and paper. The recitation of "score" or "cause output" does not, in itself, take the claim outside the mental process grouping. See CyberSource v. Retail Decisions, 654 F.3d 1366, 1372 (Fed. Cir. 2011).

Next, on pages 8-9, Applicant argues the claims do not recite a method of organizing human activity. Specifically, Applicant argues that "There is no such contractual relationship nor commercial interaction in the present claims. Further, there is literally no product advertised in the present claims. The Office Action impermissibly reaches beyond the bounds of the judicial exception that prohibits claiming aspects of 'advertising' that relate to 'contractual relationships' or 'commercial interactions,' to reject any and all claims that use the word 'advertisement,' completely ignoring the specific technical nature of the claims. The present claims are technical in nature and the recitation of 'secondary content' or 'advertisements' as part of such technical method cannot render an otherwise patent eligible claim ineligible. The technical nature of the claim removes it from the realm of abstract ideas and renders the claim eligible as the claim merely recites 'secondary content' as an object upon which the technology operates and does not recite a contractual relationship or commercial interaction related to advertising."
Examiner disagrees. The claims are directed to the abstract idea of collecting information, analyzing data (computing sentiment scores), comparing scores, and selecting content to present, i.e., data analysis and content selection. Courts have consistently held such information processing and results-oriented selection to be abstract. See, e.g., Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1353-56 (Fed. Cir. 2016) (collecting, analyzing, and displaying results is abstract); SAP America, Inc. v. InvestPic, LLC, 898 F.3d 1161, 1163-68 (Fed. Cir. 2018) (statistical analysis/modeling is abstract); PersonalWeb Techs., LLC v. Google LLC, 8 F.4th 1310, 1316-18 (Fed. Cir. 2021) (data labeling/identifying is abstract).

Applicant argues on page 9 that "There is no such contractual relationship nor commercial interaction in the present claims. Further, there is literally no product advertised in the present claims." However, Applicant's specification is replete with the subject matter of advertisements and how they are an integral part of content (see the paragraphs reproduced below).

[0001] Content may include primary content and secondary content. The subject of the primary content and the secondary content may be very different. This can cause conflicts when, for example, a user, while watching a romantic movie, receives a funny advertisement. The advertisement may, in itself, be unobjectionable to the user, but the conflicting sentiments between the movie and the advertisement can create feelings of discomfort or other undesirable emotions in a user and perhaps an aversion to what is being advertised. Thus, there is a need for more information about the secondary content surrounding primary content and more control over secondary content placement.

[0002] It is to be understood that both the following general description and the following detailed description are exemplary and explanatory only and are not restrictive.
In some aspects, provided are methods and systems for targeted content delivery. Content, such as video and/or audio, can be analyzed to determine associated emotions or sentiments. Additional content, such as advertisements, can be determined for output before or after the content based on a similarity or difference between the emotions/sentiments associated with the content and the emotions/sentiments associated with the additional content. Additional advantages will be set forth in part in the description which follows or may be learned by practice. The advantages will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims.

[0029] The secondary content source 104 may be configured to send content (e.g., video, audio, movies, television, games, applications, data, etc.) to one or more devices such as the media device 120, the gateway device 122, the network component 129, the first access point 123, the mobile device 124, and/or a second access point 125. The secondary content source 104 may comprise, for example, a content server such as an advertisement server. The secondary content source 104 may be configured to send secondary content. Secondary content can comprise, for example, advertisements (interactive and/or non-interactive) and/or supplemental content such as behind-the-scenes footage or other related content, supplemental features (applications and/or interfaces) such as transactional applications for shopping and/or gaming applications, metadata, combinations thereof, and the like. The metadata may comprise, for example, demographic data, pricing data, timing data, configuration data, combinations thereof, and the like. For example, the configuration data may include formatting data and other data related to delivering and/or outputting the secondary content.

[0046] FIG. 2B shows an example of sentiment alignment between primary content 210 and secondary content 211.
For example, the methods described herein may determine a first sentiment score associated with the primary content 210 and a second sentiment score associated with the secondary content 211. For example, the first sentiment score may be determined based on OCR detection of "gunman," "kills," and "hospital" in the caption of the news clip. Death in a news clip may make viewers feel contemplative or depressed. The secondary content 211, an advertisement for life insurance, may elicit similar feelings in viewers and thus create a smooth emotional transition between the primary content 210 and the secondary content 211.

[0048] FIG. 2D shows an example where primary content and secondary content are output at the same time. In FIG. 2C, the primary content 201 is a news broadcast covering the Russian invasion of Ukraine and the second content 201 is an advertisement for APPLEBEE'S®. This scenario may cause viewers to experience conflicting feelings (e.g., sadness or fear about the war and perhaps guilt about dining out while a war is going on).

[0051] FIG. 3B shows an example system 310. The system may comprise a computing device 311. The computing device 311 may be configured to receive primary content (e.g., a movie, a show, a news segment, combinations thereof, and the like). The computing device 311 may be configured to receive secondary content (e.g., one or more advertisements, one or more banner ads, supplemental content, one or more applications, combinations thereof, and the like). The system 310 may be configured to determine one or more content alignment scores. The one or more alignment scores may indicate a relationship between a first sentiment score associated with the primary content and one or more second sentiment scores associated with one or more pieces of secondary content.
For example, the first sentiment score associated with the primary content may be determined based on a concatenation of a multidimensional valence vector associated with the primary content and a multidimensional intensity vector associated with the primary content.

Also, there is nothing technical in nature about secondary content or advertisements, as these are part of the abstract idea. On page 10, under section B ("Applicant submits the claims integrate the alleged judicial exception into a practical application (Prong Two of Step 2A)"), there do not appear to be any clear arguments as to why the claims are integrated into a practical application.

Next, Applicant argues on pages 10-12 that the claims recite additional elements and that they provide an improvement to technology. Applicant specifically argues that "The claims contain additional elements, for example, at least the various 'sentiment scores', 'intensity scores,' and steps such as 'causing, based on a relationship between the first sentiment score and a second sentiment score of the one or more second sentiment scores, a segment of secondary content to be output,' as additional elements."

Examiner disagrees. Sentiment scores, intensity scores, and outputting content based on the scores are not additional elements; they are parts of the abstract idea. The claims do not recite an improvement to computer functionality or another technology; they recite using sentiment scores and intensity scores to select content for output. There is no particular machine (no computer hardware recited), no transformation, no improved data structure, and no specific algorithm improving computational efficiency, memory, or network functioning. See MPEP § 2106.04(d); Enfish, LLC v. Microsoft Corp., 822 F.3d 1327 (Fed. Cir. 2016) (specific self-referential table); McRO, Inc. v. Bandai Namco, 837 F.3d 1299 (Fed. Cir. 2016) (specific rules improving animation).
Applicant's assertion of "improved content delivery" is an intangible business outcome (aligning ad sentiment with primary content) rather than a technological improvement to the computer itself. See Customedia Techs., LLC v. Dish Network Corp., 951 F.3d 1359, 1365-67 (Fed. Cir. 2020) (improved advertising is not a technical improvement); Two-Way Media Ltd. v. Comcast Cable, 874 F.3d 1329, 1339-41 (Fed. Cir. 2017).

On pages 12-13, under section C ("The present claims recite additional elements that amount to significantly more than any alleged judicial exception (Step 2B)"), Applicant argues that claims 1-20 recite a series of steps and specific elements that lead to a useful result; that the present claims recite unconventional features not previously practiced by the prior art; that these features are not merely generic recitations of computer hardware or steps; and that, when considered as a whole, these unconventional features amount to "significantly more" than any alleged judicial exception.

Examiner disagrees. First, there is no computer hardware and there are no additional elements per se cited in claims 1-20. Next, there is no showing in the specification that the claimed "sentiment score," "intensity information," or "relationship" computations are unconventional. Further, the claims do not include additional elements amounting to "significantly more" than the abstract idea. The steps of receiving information, computing scores, comparing scores, selecting content, and outputting are well-understood, routine, and conventional in the field of content recommendation and advertising technology. See Electric Power Group, LLC v. Alstom S.A., 830 F.3d 1350, 1353-56 (Fed. Cir. 2016). Examiner notes once again that there are no additional elements cited in claims 1-20. Accordingly, the 35 U.S.C. 101 rejection is maintained.

Next, Applicant argues the rejection of claims 1 and 3-5 under 35 U.S.C.
102 and the Kalra reference on pages 13-16. Applicant argues that Kalra does not teach determining, based on reaction information associated with primary content and intensity information associated with the primary content, a first sentiment score associated with the primary content. Examiner disagrees. Kalra teaches this limitation as seen below.

Column 8, Lines 5-48: FIG. 3 is a diagram depicting illustrative aspects of an exemplary content recommendation engine, consistent with exemplary aspects of at least some embodiments of the present disclosure. As shown in the exemplary architecture of FIG. 3, an illustrative content recommendation engine 300 operates a customer/content preference predictor 310 and a content analyzer 320 to generate recommended personalized content 330 (e.g., message, part thereof, etc.), and which may also be based also on human curated content 340. The customer predictor 310 operates to receive input such as audience data 312, content data 314, and message type data 316, which is in turn provided to a prediction model 318 to generate customer preferences 319. In some embodiments, customer preferences 319 may include information regarding a customer's likelihood of engagement (e.g., propensity prediction, etc.), a customer's preferred time/schedule for accessing his or her messages, a customer's preferred channel of contacts (e.g., emails, SMS, etc.), and the like, as explained in more detail elsewhere herein. The prediction model 318 may be trained by user data from the same customer, a group of similar customers, content personalization engine generated preferences (e.g., feedback data), or other suitable aggregation of customer data. Content analyzer 320 operates to receive input such as sample data/content 322, which is in turn provided to a content model 324 to generate content scores 326. In some embodiments, the sample data/content 322 may include and/or involve a portion of human curated content 340.
In other embodiments, the sample data/content 322 can be the entire collection of the user data collected in the past. Further, the content model 324 may be configured for scoring content along a plurality of dimensions, for example, dimensions of various meta data such as emotion, sentiments, intents, and the like. The content model 324 may be trained by user data from the same customer, a group of similar customers, content recommendation engine generated content (e.g., feedback data, NLG data), or other suitable aggregation of customer data. As shown herein, based on the intelligence gleaned by the audience predicator 310 and the content analyzer 320, the content recommendation engine 300 recommends personalized items of content such as human curated content 340 (e.g., service provider agent selection of a marketing campaign) to generate personalized subject line/content 330.

Column 9, Lines 37-39: FIG. 5 is a flowchart illustrating one illustrative process associated with content recommendation, consistent with exemplary aspects of certain embodiments of the present disclosure. Referring to FIG. 5, an illustrative content recommendation process 500 may comprise: receiving first data comprising content preferences associated with an audience, at 502; receiving second data comprising an initial digital message being proposed for transmission to the audience, at 504; generating a recommendation data set based on the first data and the second data, at 506; determining, by a natural language machine learning model, suggested content for the audience, at 508; and providing the suggested content for dissemination to the audience, at 510. Further, the content recommendation process 500 may be carried out, in whole or in part, online, e.g. via a content recommendation engine and/or it may be carried out by in conjunction with various messaging application functionality, described above.
As seen in the passages above, Kalra clearly teaches primary content, intensity information associated with the primary content, and a first sentiment score associated with the primary content.

Next, Applicant argues that Kalra does not teach determining, based on the first sentiment score, one or more segments of secondary content, wherein each segment … is associated with one or more second sentiment scores. Examiner disagrees. Kalra teaches this limitation as seen below.

Abstract: Systems and methods associated with providing content personalization are disclosed. In one embodiment, an exemplary method may comprise receiving first data including content preferences associated with an audience, receiving second data including an initial digital message being proposed for transmission to the audience, generating a recommendation data set based at least in part on the first data and the second data, wherein the recommendation data set identifies at least one recommended content type and at least one recommended message type, determining via a natural language generation machine learning model suggested content for the audience, and providing the suggested content for dissemination to the audience.
Column 1, Lines 55-67 and Column 2, Lines 1-21: In some embodiments, the present disclosure provides various exemplary technically improved computer-implemented methods including steps such as: receiving, by at least one computer, first data comprising content preferences associated with an audience; receiving, by the at least one computer, second data comprising an initial digital message being proposed for transmission to the audience; generating, by the at least one computer, a recommendation data set based at least in part on the first data and the second data, wherein the recommendation data set identifies at least one recommended content type and at least one recommended message type; determining, by the at least one computer, via a natural language generation (NLG) machine learning model, suggested content for the audience, e.g., via: analyzing a repository of content messages associated with a plurality of sentiments based at least in part on the recommended data set to identify one or more correlations between the recommended data set and one or more content messages, wherein the repository of content messages are categorized into a plurality of message sentiment categories based on scored conversion events associated with the plurality of sentiments; and generating the suggested content based on the one or more correlations, wherein the suggested content comprises one or more suggested content messages having one or both of a suggested message type and a suggested message language; and providing, by the at least one computer, via an application program interface (API), the suggested content for dissemination to the audience.

Column 9, Lines 14-49: As to the output, the output provider 420 of the content preference predictor 400 is configured to provide a set of one or more predictions on customer preferences.
As shown herein, such predicted preferences include predictions of content type 422, predictions of audience/context 424, and predictions of message type 426. Based on the predicted customer preferences (e.g., customer intelligence), impactful meta data (e.g., emotion content) as well as impactful content (e.g., an item of content that most likely is to engage a customer, an item of content that is of high interest to a customer at a particular context) 428 can be determined, and subsequently used to verify the degree of relevancy, engagement, impactfulness of an existing item of content (e.g., a digital message to be sent to a customer), or generate a relevant and engaging message. As shown herein FIG. 4 , a matrix 425 of predicted content types is displayed to show representative content types, e.g., FIG. 4 showing non-limited examples thereof, such as urgency, trust, joy, achievement, and excitement. FIG. 5 is a flowchart illustrating one illustrative process associated with content recommendation, consistent with exemplary aspects of certain embodiments of the present disclosure. Referring to FIG. 5 , an illustrative content recommendation process 500 may comprise: receiving first data comprising content preferences associated with an audience, at 502; receiving second data comprising an initial digital message being proposed for transmission to the audience, at 504; generating a recommendation data set based on the first data and the second data, at 506; determining, by a natural language machine learning model, suggested content for the audience, at 508; and providing the suggested content for dissemination to the audience, at 510. Further, the content recommendation process 500 may be carried out, in whole or in part, online, e.g. via a content recommendation engine and/or it may be carried out by in conjunction with various messaging application functionality, described above. 
As seen in the passages above, Kalra clearly teaches one or more segments of secondary content. First data is classified and scored along a plurality of dimensions, for example, dimensions of various meta data such as emotion, sentiments, and intents, and a further recommendation data set (one or more segments of secondary content) is generated based at least in part on the first data and the second data (causing, based on a relationship between the first sentiment score and a second sentiment score of the one or more second sentiment scores), wherein this recommendation is matched to the emotional and sentiment valuation assessed on the first (and/or second) data.

Next, Applicant argues that Kalra does not teach causing, based on a relationship between the first sentiment score and a second sentiment score … a segment of secondary content to be output. Examiner disagrees. Kalra teaches this limitation as seen below: Empowered with various intelligence regarding the creation and transmission of a message, upon receiving a trigger signal 792, a messaging engine 720 may be configured to transmit personalized messages 796, 798, and 799 to customers at respective personalized communication channels, or not to transmit a message (lack of a message) 794 to a customer. For example, the message 796 may be transmitted to a customer as a SMS message to the mobile device of the customer. Both message 798 and message 799 may be transmitted to a respective customer as an email message to a respective email address associated with the customers. However, as indicated by the varying patterns of the hatching lines, the email message 798 is personalized differently than the email message 799, depending on the semantic scores of the respective customers.
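The score-matching described above (selecting secondary content whose sentiment score has a desired relationship to the primary content's score), together with the application's vector-based scoring (a concatenated multidimensional valence vector and intensity vector), can be illustrated with a minimal sketch. All function names, vector dimensions, and the cosine-similarity choice are assumptions for illustration only; they come from neither the application nor the cited references.

```python
# Hypothetical sketch: build a sentiment vector by concatenating valence and
# intensity dimensions, then score alignment between primary and secondary
# content as cosine similarity (1.0 = fully aligned).
import math

def sentiment_vector(valence, intensity):
    """Concatenate valence and intensity dimensions into one feature vector."""
    return list(valence) + list(intensity)

def alignment_score(primary_vec, secondary_vec):
    """Cosine similarity between two sentiment vectors."""
    dot = sum(a * b for a, b in zip(primary_vec, secondary_vec))
    norm_p = math.sqrt(sum(a * a for a in primary_vec))
    norm_s = math.sqrt(sum(b * b for b in secondary_vec))
    return dot / (norm_p * norm_s)

# A somber news clip vs. a life-insurance ad (aligned) and a comedy ad
# (misaligned), echoing the FIG. 2B / 2D examples from the specification.
news = sentiment_vector([0.1, 0.9], [0.8, 0.7])          # low joy, high sadness
insurance_ad = sentiment_vector([0.2, 0.8], [0.7, 0.6])
comedy_ad = sentiment_vector([0.9, 0.1], [0.9, 0.3])

print(alignment_score(news, insurance_ad) > alignment_score(news, comedy_ad))  # True
```

Under this sketch, the "relationship" between first and second sentiment scores is simply the similarity of the two vectors; a system could equally select dissimilar content if a contrast is desired.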
Applicant argues on page 17 (claim 8) that Burger does not teach determining a first sentiment score associated with a first portion of primary content and a second sentiment score associated with a second portion of primary content. Examiner disagrees. Burger teaches the limitation above in cited paragraph 50, as seen below.

It will be appreciated that, just as an advertising campaign including several advertisements related to an overall brand concept may be managed by selecting particular advertisements in view of the targeted viewer's emotional status, individual advertisements may be similarly customized to the targeted viewer. For example, a first portion of an advertisement may be provided to the targeted viewer and the emotional response of the targeted viewer may be monitored. In response, one of a plurality of second portions of that advertisement may be selected to be presented to the targeted viewer based on the emotional response to the first portion. Thus, if the targeted viewer's emotional expression was judged favorable to a dramatic mountain vista shown in an opening scene of the first portion advertisement, a second portion including additional rugged mountain scenery may be selected instead of an alternative second portion that includes a dynamic office environment. Thus, in some embodiments, 232 may include, at 244, selecting a first portion of the particular advertisement to be sent to the targeted viewer.

Next, Applicant argues on page 17 that the combination of Burger and Beisel does not teach causing the secondary content to be output between the first portion of primary content and the second portion of primary content. Burger teaches this limitation in paragraphs 58-59, as seen below.
As one example, in an embodiment where the targeted viewer's emotional response to a first portion of the particular advertisement is used, during display of the advertisement, to select a second portion of the advertisement, method 200 may include, at 262, selecting a second portion of the particular advertisement based on the targeted viewer's emotional response to the first portion of the particular advertisement, and, at 264, sending the second portion of the particular advertisement to another computing device to be output for display to the targeted viewer. At 266 of such embodiments, method 200 includes outputting the second portion of the particular advertisement for display. As another example of how the targeted viewer's emotional response to the particular advertisement may be used to provide additional advertising content, in some embodiments, method 200 may include, at 268, selecting a related advertisement to send to the targeted viewer, sending the related advertisement to another computing device at 270, and, at 272, outputting the related advertisement for display to the targeted viewer. Any advertisement suitably related to the particular advertisement may be provided as the related advertisement. Suitable relationships include, but are not limited to, contextual relationships among the advertisements, such as advertising campaign and/or concept relationships, brand relationships, geographic relationships, and goods and services relationships.

In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir.
1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

In this case, Burger and Beisel are both analogous prior art; both deal with providing media or advertisements based on emotions or viewer feedback. Examiner has properly provided motivation to combine the references, as seen below: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select the sentiment of the second portion/content in Burger based on its being different than the sentiment of the first portion/content as taught by Beisel, since this expansion based on user sentiment enables content providers to determine whether the content is having an intended effect (see Beisel, ¶1:66-2:6). Moreover, this implementation would be a simple substitution of one known element (the second portion/content sentiment being based on the first portion/content sentiment) for another, and the substitution produces no new and unexpected result.

Applicant argues on page 20 (claim 15) that Kalra teaches "reaction information." Examiner disagrees. Chopdekar was brought in to teach reaction information, as seen below. Chopdekar discloses (see at least Chopdekar, ¶14): "In some embodiments, data mining and machine learning tools and techniques are used to manage information, including analyzing content (one or more segments of primary content) and determining sentiment (reaction information). For example, data mining and machine learning may be used to determine communication information and sentiment information for various pieces of content including text (based on text data), audio (based on audio data), and/or visual data, to analyze the information, including comparing the sentiment information, and to manage information.
Machine learning based models can have information about synonyms and antonyms associated with sentiments, for example, so they can identify synonymous and antonymous sentiments, as well as levels of synonymy (e.g., that a word may be more synonymous to one word than another)."

In response to applicant's argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988); In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992); and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007).

In this case, Kalra and Chopdekar are both analogous prior art; both deal with providing media or advertisements based on emotions or viewer feedback. Examiner has properly provided motivation to combine the references, as seen below: It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to expand the sentiment scores in Kalra with the comparable sentiment scores of Chopdekar, since this modification would be a simple substitution of one known sentiment score determination element (the intensity feature and the based-on-audio-data feature taught by Chopdekar) for another (the sentiment scoring of Kalra), and the substitution produces no new and unexpected result. Moreover, this modification enhances the scoring functionality of Kalra.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

Claims 15-20 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA, the applicant) regards as the invention. Claim 15 recites the limitation: "based on a correlation between the average sentiment score and a second sentiment score associated with secondary content, causing output of the secondary content"; however, the claim does not recite an active step of determining the "secondary content" prior to this claimed "causing output" step, making the claim language vague and ambiguous. Dependent claims 16-20 inherit the deficiencies of independent claim 15. For purposes of examining these claims, prior art teaching a determination step is used. Nevertheless, the claims are indefinite, and appropriate clarification, indication of support, and correction are required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more.

Step 1: In the instant case, claims 1-7, 8-14, and 15-20 are directed to a method; therefore, the claims are directed to a statutory category of invention.
Step 2A - Prong 1: Independent claim 1 comprises steps of: determining a first sentiment score associated with the primary content (based on reaction information associated with primary content and intensity information associated with the primary content); determining one or more segments of secondary content associated with one or more second sentiment scores (based on the first sentiment score, or a relationship); and outputting a segment of secondary content. Independent claim 8 comprises steps of: determining a first sentiment score associated with a first portion of primary content; determining a second sentiment score associated with a second portion of primary content; determining secondary content (based on a difference between the first sentiment score and the second sentiment score); and causing the secondary content to be output between the first portion of primary content and the second portion of primary content. Independent claim 15 comprises steps of: determining reaction information associated with the one or more segments of primary content (based on text data associated with one or more segments of primary content); determining one or more intensity scores associated with the one or more segments of primary content (based on audio data associated with the one or more segments of primary content); determining an average sentiment score associated with the one or more segments of primary content (based on the reaction information and the one or more intensity scores); and causing output of the secondary content. The independent claims are directed to a method for providing targeted secondary content to a user based on sentiment/reaction scores of a user associated with user exposure to content.
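Purely for illustration, the score-then-select flow characterized in the claim summaries above can be sketched in Python. The function names, the simple averaging of reaction and intensity, and the closest-score selection rule are assumptions made for this sketch, not the applicant's disclosed implementation or the examiner's characterization of it:

```python
# Hypothetical sketch: combine reaction and intensity into one score,
# then select the secondary-content segment whose score is nearest.

def first_sentiment_score(reaction: float, intensity: float) -> float:
    """Combine reaction information and intensity information into a
    single sentiment score (simple average, chosen arbitrarily)."""
    return (reaction + intensity) / 2.0

def select_segment(first_score: float, candidates: dict) -> str:
    """Pick the secondary-content segment whose second sentiment score
    is closest to the first score (one possible 'relationship')."""
    return min(candidates, key=lambda seg: abs(candidates[seg] - first_score))

score = first_sentiment_score(reaction=0.8, intensity=0.4)   # ≈ 0.6
segment = select_segment(score, {"ad_a": 0.1, "ad_b": 0.55, "ad_c": 0.9})
print(segment)  # ad_b
```

Under this reading, the "relationship" step of claim 1 is just a nearest-neighbor comparison over precomputed second sentiment scores.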
Accordingly, the claimed steps represent a method of organizing commercial interactions comprising advertising, marketing, and sales activities, which falls within the "Certain Methods of Organizing Human Activity" abstract idea grouping, wherein all the claim steps can be seen as being part of the abstract idea of providing targeted secondary content to a user. In addition, the above claimed steps are steps of collecting/tracking data (transmitting, receiving, storing, gathering), analyzing data, making determinations/correlations/comparisons, and displaying/presenting data. All these steps, but for the use of generic computer components that execute them, are generic functions performed by general-purpose computers, which relate to concepts that can be performed in the human mind.

Step 2A - Prong 2: The claims do not recite a computer, and therefore no additional elements are claimed. Since there are no additional elements, the abstract idea is not integrated into a practical application.

Step 2B: Since there are no additional elements and all the claim steps can be seen as being part of the abstract idea, there is no inventive concept present in the claims. The dependent claims likewise do not recite a computer. When considered as a whole, the same analysis with respect to Step 2A Prong Two and Step 2B applies to the dependent claims: they cannot integrate a judicial exception into a practical application at Step 2A or provide an inventive concept at Step 2B.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention. Claims 1 and 3-5 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kalra et al., U.S. Patent No. 11,900,407. Regarding claim 1, Kalra discloses: (determining, based on reaction information associated with primary content and intensity information associated with the primary content, a first sentiment score associated with the primary content). (determining, based on the first sentiment score, one or more segments of secondary content, wherein each segment of secondary content of the one or more segments of secondary content is associated with one or more second sentiment scores). (causing, based on a relationship between the first sentiment score and a second sentiment score of the one or more second sentiment scores, a segment of secondary content to be output). Systems and methods associated with providing content personalization are disclosed. In one embodiment, an exemplary method may comprise receiving first data including content preferences associated with an audience, receiving second data including an initial digital message being proposed for transmission to the audience, generating a recommendation data set based at least in part on the first data and the second data, wherein the recommendation data set identifies at least one recommended content type and at least one recommended message type, determining via a natural language generation machine learning model suggested content for the audience, and providing the suggested content for dissemination to the audience. 
(see at least Kalra, abstract, ¶1:55-2:21). (primary content). First data including content preferences (see at least Kalra, abstract, fig. 5, ¶9:37-39). Generate customer preferences. The customer predictor 310 operates to receive input such as audience data 312, content data 314, and message type data 316, which is in turn provided to a prediction model 318 to generate customer preferences 319. (see at least Kalra, fig. 3, 5, ¶8:5-48). (a first sentiment score associated with the primary content). Content analyzer 320 operates to receive input such as sample data/content 322, which is in turn provided to a content model 324 to generate content scores 326. Further, the content model 324 may be configured for scoring content along a plurality of dimensions, for example, dimensions of various meta data such as emotion, sentiments, intents, and the like. (see at least Kalra, fig. 3, ¶8:27-38) (a first sentiment score associated with the primary content). Equipped with the knowledge of the categorized and tagged content collected in the sentiment repository 656, a machine learning model (e.g., an emotion engine 654) may be configured to score items of content based on the respective category. In some embodiments, based on the scoring of items, an item of content may be verified in terms of whether the content incurs the desired sentimental effect. In other embodiments, also based on the scoring of items, items of content may be recommended to effectuate the desired sentimental effect. (see at least Kalra, fig. 6C, ¶8:27-38) (determining, based on the first sentiment score, one or more segments of secondary content, wherein each segment of secondary content of the one or more segments of secondary content is associated with one or more second sentiment scores). 
Receiving second data including an initial digital message being proposed for transmission to the audience, generating a recommendation data set based at least in part on the first data and the second data, wherein the recommendation data set identifies at least one recommended content type and at least one recommended message type, determining via a natural language generation machine learning model suggested content for the audience, and providing the suggested content for dissemination to the audience (see at least Kalra, abstract, ¶1:55-2:21). Receiving second data comprising an initial digital message being proposed for transmission to the audience, at 504; generating a recommendation data set based on the first data and the second data (based on the first sentiment score), at 506 (one or more segments of secondary content); determining, by a natural language machine learning model, suggested content for the audience, at 508 (one or more segments of secondary content); and providing the suggested content for dissemination to the audience, at 510 (see at least Kalra, fig. 5, ¶9:39-49). The above recommendation data set and/or the suggested content (and/or second data) teaches the claimed "one or more segments of secondary content." In Kalra, first data is classified and scored along a plurality of dimensions, for example, dimensions of various metadata such as emotion, sentiments, and intents, and a further recommendation data set (one or more segments of secondary content) is generated based at least in part on the first data and the second data (causing, based on a relationship between the first sentiment score and a second sentiment score of the one or more second sentiment scores), wherein this recommendation is matched to the emotional and sentiment valuation assessed on the first (and/or second) data (see at least Kalra, fig. 4, ¶9:14-9:32).
Based on the emotional/sentiment context characterization of the predicted customer preferences (e.g., customer intelligence), a relevant and engaging message (one or more segments of secondary content) is generated (see at least Kalra, fig. 4, ¶9:14-9:32). Based on the emotional/sentiment context characterization of the predicted customer preferences (e.g., customer intelligence), suggested content for the audience 508 (one or more segments of secondary content) (see at least Kalra, fig. 5, ¶9:50-64). (secondary content of the one or more segments of secondary content is associated with one or more second sentiment scores). Content analyzer 320 operates to receive input such as sample data/content 322, which is in turn provided to a content model 324 to generate content scores 326. Further, the content model 324 may be configured for scoring content along a plurality of dimensions, for example, dimensions of various metadata such as emotion, sentiments, intents, and the like. Based on the intelligence gleaned by the audience predictor 310 and the content analyzer 320, the content recommendation engine 300 recommends personalized items of content (one or more second sentiment scores) (see at least Kalra, fig. 3, ¶8:27-49). (causing, based on a relationship between the first sentiment score and a second sentiment score of the one or more second sentiment scores, a segment of secondary content to be output). A messaging engine 720 may be configured to transmit (causing, to be output) (a segment of secondary content to be output) personalized messages 796, 798, and 799 to customers at respective personalized communication channels (see at least Kalra, fig. 7A, ¶15:64-16:8). Regarding claim 3, Kalra discloses: All the limitations of the corresponding parent claim (claim 1) as per the above rejection statements. Kalra further discloses: (one or more objects in the primary content). Plurality of context objects (see at least Kalra, fig. 4, ¶8:57-9:13).
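As a purely illustrative aside, the "scoring content along a plurality of dimensions" and closest-match recommendation described in the cited Kalra passages can be sketched as follows; the dimension names, catalog values, and the Euclidean distance metric are assumptions for the sketch, not anything disclosed by Kalra:

```python
# Hypothetical sketch: score content on (emotion, sentiment, intent)
# dimensions and recommend the catalog item closest to the primary content.
import math

def distance(a: tuple, b: tuple) -> float:
    """Euclidean distance between two multi-dimensional score vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

primary = (0.7, 0.2, 0.5)            # invented (emotion, sentiment, intent) scores
catalog = {
    "item_1": (0.1, 0.9, 0.3),
    "item_2": (0.6, 0.3, 0.4),
    "item_3": (0.9, 0.8, 0.9),
}
best = min(catalog, key=lambda k: distance(catalog[k], primary))
print(best)  # item_2
```

This is one way a "second sentiment score closest to the first sentiment score" (the claim 4 language discussed next) could be operationalized.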
Regarding claim 4, Kalra discloses: All the limitations of the corresponding parent claim (claim 1) as per the above rejection statements. Kalra discloses: (wherein determining the one or more segments of secondary content comprises selecting, from among one or more secondary content items, one or more segments of secondary content associated with a second sentiment score closest to the first sentiment score). Content messages associated with a plurality of sentiments based at least in part on the recommended data set to identify one or more correlations between the recommended data set and one or more content messages (see at least Kalra, ¶9:52-55). In some embodiments, the content framework 652 is configured to categorize and tag elements of content along the dimensions of emotion, tone, message type, semantic similarity, and/or other content similarity (closest), and the like (see at least Kalra, ¶14:37-40). Regarding claim 5, Kalra discloses: All the limitations of the corresponding parent claim (claim 1) as per the above rejection statements. Kalra discloses: (wherein the relationship between the first sentiment score and the second sentiment score comprises one or more of a similarity or a difference) (see at least Kalra, ¶14:37-40).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim 2 is rejected under 35 U.S.C.
103 as being unpatentable over Kalra et al., U.S. Patent No. 11,900,407 in view of Zhang et al. (CN 11/3255755). Examiner's note: For the purpose of examining the instant claims hereinafter, an Espacenet English machine-translation of Zhang et al. (CN 11/3255755) will be utilized. Regarding claim 2, Kalra discloses: All the limitations of the corresponding parent claim (claim 1) as per the above rejection statements. Kalra does not disclose: (wherein the first sentiment score comprises a concatenation of a multidimensional valence vector associated with one or more segments of primary content and a multidimensional intensity vector associated with the one or more segments of primary content, and wherein the second sentiment score comprises a concatenation of a multidimensional valence vector associated with the one or more segments of secondary content and a multidimensional intensity vector associated with the one or more segments of secondary content). However, Zhang teaches this limitation. Zhang discloses: A multimodal sentiment classification method based on a heterogeneous fusion network, wherein the method extracts three modal data of text, picture and audio from videos posted by network users, and uses a heterogeneous fusion network model based on deep learning to respectively identify the sentiment categories of text (one or more objects in the primary content), picture, audio and the overall video (see at least Zhang, ¶7). Fusion methodology as per above, therefore concatenation. Concatenation of one or more of: the text feature vector, the audio feature vector, and the image feature vector (see at least Zhang, ¶32, 37, 48, 65, 120, 130, 145, 173) (concatenation of a multidimensional valence vector). Input data comes from the video sentiment classification dataset CMU-MOSI.
The sentiment class labels of this dataset are represented by elements in {-3, -2, -1, 0, 1, 2, 3}, with a total of 7 types, of which -3, -2, and -1 represent negative, and 0, 1, 2, and 3 represent non-negative (multidimensional valence vector). The input data includes complete videos and video clips, all of which are extracted into three modal data types: text, pictures, and audio (see at least Zhang755, ¶98). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the sentiment scoring features in Kalra with the fusion methodology of Zhang, since this heterogeneous fusion, multimodal sentiment classification method facilitates mining different granular sentiment features within various modal data (see at least Zhang, ¶6-7), thereby enhancing the overall sentiment content recommendation system of Kalra. Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Kalra et al., U.S. Patent No. 11,900,407 in view of Beisel et al., U.S. Patent No. 11,373,446. Regarding claim 6, Kalra discloses: All the limitations of the corresponding parent claim (claim 1) as per the above rejection statements. Kalra does not disclose: (wherein causing the one or more segments of secondary content to be output comprises: sending a request for the one or more segments of secondary content; receiving, based on the request for the one or more segments of secondary content, the one or more segments of secondary content; and displaying the one or more segments of secondary content). Beisel discloses: Displaying second content to the user corresponding to (in association with) an adjusted emotion state (a second emotion state) (see at least Beisel, fig. 3, ¶17:55-18:11). Content request (see at least Beisel, fig. 3, ¶9:45-53; 28:22-30).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to expand the feature of targeting a user sentiment about particular content in Kalra with the comparable feature of Beisel, since this expansion based on user sentiment enables content providers to determine whether the content is having an intended effect (see Beisel, ¶1:66-2:6). Moreover, this implementation would be a simple substitution of one known element of targeting a user sentiment about particular content (a second emotion state) for another, and the substitution produces no new and unexpected result. Regarding claim 7, Kalra discloses: All the limitations of the corresponding parent claim (claim 1) as per the above rejection statements. Kalra discloses: Dynamic tracking of user sentiment preferences (updating a sentiment profile) (see at least Kalra, fig. 3, ¶8:5-48). Kalra does not disclose: (receiving an indication that a user has navigated away from the one or more segments of secondary content; and based on the user navigating away from the secondary content, updating a sentiment profile associated with the user). Beisel discloses: In response to a user interacting with a second content (navigating away from the one or more segments of secondary content), a second emotion state of the user (associated with presentation of the second content to the user) is determined, and the emotional profile of the user is updated (updating a sentiment profile associated with the user) (see at least Beisel, fig. 3, ¶17:55-18:11).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to expand the feature of targeting a user sentiment about particular content in Kalra with the comparable feature of Beisel, since this expansion based on user sentiment enables content providers to determine whether the content is having an intended effect (see Beisel, ¶1:66-2:6). Moreover, this implementation would be a simple substitution of one known element of targeting a user sentiment about particular content for another (a second emotion state), and the substitution produces no new and unexpected result. Claims 8, 10-11, and 13-14 are rejected under 35 U.S.C. 103 as being unpatentable over Burger et al., U.S. Patent Application Publication 2015/0296239 in view of Beisel et al., U.S. Patent No. 11,373,446. Regarding claim 8, Burger discloses: Embodiments related to selecting advertisements for display to targeted viewers are disclosed. In one example embodiment, an advertisement is selected by, for each of a plurality of advertisements, aggregating a plurality of emotional response profiles from a corresponding plurality of prior viewers of the advertisement to form an aggregated emotional response profile for the advertisement, wherein each of the emotional response profiles comprises a temporal record of a prior viewer's emotional response to the advertisement. The method further includes identifying a group of potentially positively correlated viewers for the targeted viewer, filtering the aggregated emotional response profiles based on the group of potentially positively correlated viewers, selecting a particular advertisement from the plurality of advertisements based on a correlation of the filtered aggregated emotional response profiles, and sending the particular advertisement for display to the targeted viewer (see at least Burger, abstract).
Sentiment scoring (see at least Burger, ¶12-13, 41). Just as an advertising campaign including several advertisements related to an overall brand concept may be managed by selecting particular advertisements in view of the targeted viewer's emotional status, individual advertisements may be similarly customized to the targeted viewer. For example, a first portion of an advertisement (a first portion of primary content) may be provided to the targeted viewer and the emotional response of the targeted viewer may be monitored (a first sentiment) (see at least Burger, ¶50). In response to the provision of a first portion and its corresponding emotional response, one of a plurality of second portions of that advertisement (a second portion of primary content) may be selected to be presented to the targeted viewer based on the emotional response to the first portion (a second sentiment). Thus, if the targeted viewer's emotional expression was judged favorable to a dramatic mountain vista shown in an opening scene of the first portion advertisement, a second portion including additional rugged mountain scenery may be selected instead of an alternative second portion that includes a dynamic office environment. Thus, in some embodiments, 232 may include, at 244, selecting a first portion of the particular advertisement to be sent to the targeted viewer (see at least Burger, ¶50). Burger further discloses: Sentiment scoring (a first sentiment score) (see at least Burger, ¶12-13, 41); wherein, since in Burger as per above, a second portion of a given advertisement is selected due to its particular, known association with the sentiment score of the first portion, Burger teaches (a second sentiment score) (determining, a first sentiment score associated with a first portion of primary content and a second sentiment score associated with a second portion of primary content).
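The Burger branching described above (a measured response to the first portion steering which second portion is shown) can be illustrated with a minimal, hypothetical sketch; the threshold, portion names, and numeric response scale are all invented for illustration:

```python
# Hypothetical sketch of response-driven portion selection, in the spirit
# of Burger's mountain-vista example. Names and thresholds are invented.

def pick_second_portion(first_portion_response: float) -> str:
    """Select among alternative second portions based on the sentiment
    measured during the first portion (favorable -> related scenery)."""
    if first_portion_response >= 0.5:      # judged favorable
        return "mountain_scenery_portion"
    return "office_environment_portion"

print(pick_second_portion(0.8))  # mountain_scenery_portion
print(pick_second_portion(0.2))  # office_environment_portion
```

Note that nothing in this sketch computes a score difference; that is the gap the rejection fills with Beisel, as discussed next.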
Burger does not disclose: (determining, based on a difference between the first sentiment score and the second sentiment score, secondary content). However, Beisel discloses: Devices and methods are provided for using an interactive media to select content based on a user emotion. The device may receive user data associated with presentation of first content at a first time. The device may determine, based on the user data, a first emotion state of a user. The device may determine a target emotion state for the user at a second time. The device may determine a difference between the first emotion state and the target emotion state. The device may determine, based on the difference, a function associated with content determination. The device may determine, based on the function, second content for presentation to the user. (see at least Beisel, abstract). At block 306, the device may determine a difference between the first emotion state (e.g., an emotion with the score indicating the highest likelihood of the user exhibiting that emotion state compared to any other emotion state, or an emotion state with a score satisfying a score threshold) and a target emotion state (see at least Beisel, fig. 3A, ¶16:3-8). At block 308, the device may determine, based on the difference between the user's first emotion state and the target emotion state, a function associated with content determination. The function may measure how significant an emotional transition from the user's first emotion state to the target emotion state may be given an amount of time to cause the target emotional state (e.g., the amount of time to cause a transition from a happiness score of 25 to a happiness score of 75). (see at least Beisel, fig. 3A, ¶16:52-60). At block 310, the device may determine, based on the function, and the desired emotion, second content to be presented to the user. 
When the difference between the first emotion state and the target emotion state is significant, the device may identify content known to transition from a particular emotion state, or known to be associated with an emotional score indicating a high likelihood of an emotion (e.g., a score of 100 out of 100 for causing a smile). The content may be different content than the content displayed when the user data was captured, or may include a different quantity or other type of alteration to the content displayed when the user data was captured (see at least Beisel, fig. 3A, ¶17:9-25). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to select the sentiment of the second portion/content in Burger based on its being different than the sentiment of the first portion/content, as taught by Beisel, since this expansion based on user sentiment enables content providers to determine whether the content is having an intended effect (see Beisel, ¶1:66-2:6). Moreover, this implementation would be a simple substitution of one known element (the second portion/content sentiment being based on the first portion/content sentiment) for another, and the substitution produces no new and unexpected result. (causing the secondary content to be output between the first portion of primary content and the second portion of primary content). At 262, selecting a second portion of the particular advertisement based on the targeted viewer's emotional response to the first portion of the particular advertisement, and, at 264, sending the second portion of the particular advertisement to another computing device to be output for display to the targeted viewer, and further at 266 of such embodiments, method 200 includes outputting the second portion of the particular advertisement for display (see at least Burger, fig. 2A-2C, ¶58).
At 272, outputting the related advertisement for display to the targeted viewer (see at least Burger, fig. 2A-2C, ¶59). Regarding claim 10, Burger in view of Beisel discloses: All the limitations of the corresponding parent claim (claim 8) as per the above rejection statements. Burger discloses: Updating a sentiment profile (see at least Burger, abstract). Burger does not disclose: (receiving an indication that a user has navigated away from the secondary content; and based on the user navigating away from the secondary content, updating a sentiment profile associated with the user). Beisel discloses: In response to a user interacting with a second content (navigating away from the secondary content), a second emotion state of the user (associated with presentation of the second content to the user) is determined, and the emotional profile of the user is updated (updating a sentiment profile associated with the user) (see at least Beisel, fig. 3, ¶17:55-18:11). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to expand the feature of targeting a user sentiment about particular content in Burger with the comparable feature of Beisel, since this expansion based on user sentiment enables content providers to determine whether the content is having an intended effect (see Beisel, ¶1:66-2:6). Moreover, this implementation would be a simple substitution of one known element of targeting a user sentiment about particular content for another, and the substitution produces no new and unexpected result. Regarding claim 11, Burger in view of Beisel discloses: All the limitations of the corresponding parent claims (claims 8 and 10, respectively) as per the above rejection statements.
Burger discloses: (determining alternative secondary content by selecting the alternative secondary content from among one or more alternative secondary content items). In response to the provision of a first portion and its corresponding emotional response, one of a plurality of second portions of that advertisement (determining, based on the first sentiment score, one or more segments of secondary content) may be selected to be presented to the targeted viewer based on the emotional response to the first portion (see at least Burger, ¶50). Supplemental advertisement content related to an advertisement being watched (see at least Burger, ¶20). The related advertisement may include content supplementary to the particular advertisement (see at least Burger, ¶60). If the aggregated emotional response profiles for a group of viewers show a relatively higher emotional response for advertisements depicting intense outdoor recreation, then similar advertisements for other products may be selected for future display to the targeted viewer (see at least Burger, ¶43). Regarding claim 13, Burger in view of Beisel discloses: All the limitations of the corresponding parent claim (claim 8) as per the above rejection statements. Burger discloses: (wherein a relationship between the first sentiment score and the second sentiment score comprises a first degree of similarity). Based on similarities (see at least Burger, ¶32). Based on a preselected threshold (a first degree), the advertisement may be selected for display to the targeted viewer (see at least Burger, ¶42). Regarding claim 14, Burger in view of Beisel discloses: All the limitations of the corresponding parent claims (claims 8 and 13) as per the above rejection statements. Burger discloses: (wherein the secondary content comprises one or more advertisements) (see at least Burger, abstract, fig. 1, 3, ¶11-13, 50). Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Burger et al., U.S.
Patent Application Publication 2015/0296239 in view of Beisel et al., U.S. Patent No. 11,373,446 and further in view of Zhang et al. (CN 11/3255755). Regarding claim 9, Burger in view of Beisel discloses: All the limitations of the corresponding parent claim (claim 8) as per the above rejection statements. The Burger/Beisel combination formulated in the rejection of claim 8 does not disclose: (wherein each of the first sentiment score and the second sentiment score comprise a concatenation of a multidimensional valence vector and a multidimensional intensity vector). However, Zhang teaches this limitation. Zhang discloses: A multimodal sentiment classification method based on a heterogeneous fusion network, wherein the method extracts three modal data of text, picture and audio from videos posted by network users, and uses a heterogeneous fusion network model based on deep learning to respectively identify the sentiment categories of text (closed caption data associated with the primary content), picture, audio and the overall video (see at least Zhang, ¶7). Fusion methodology as per above, therefore concatenation. Concatenation of one or more of: the text feature vector, the audio feature vector, and the image feature vector (see at least Zhang, ¶32, 37, 48, 65, 120, 130, 145, 173) (concatenation of a multidimensional valence vector). Input data comes from the video sentiment classification dataset CMU-MOSI. The sentiment class labels of this dataset are represented by elements in {-3, -2, -1, 0, 1, 2, 3}, with a total of 7 types, of which -3, -2, and -1 represent negative, and 0, 1, 2, and 3 represent non-negative (multidimensional valence vector). The input data includes complete videos and video clips, all of which are extracted into three modal data types: text, pictures, and audio (see at least Zhang755, ¶98).
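The "concatenation" characterization applied to Zhang above can be shown with a minimal, purely illustrative sketch of the claim-9 limitation: a sentiment score formed by concatenating a multidimensional valence vector with a multidimensional intensity vector. The vector values and the helper name are invented for illustration:

```python
# Hypothetical sketch: a combined sentiment representation built by
# concatenating a valence vector and an intensity vector, as the claim
# language recites. Values below are invented.

def concat_score(valence: list, intensity: list) -> list:
    """Concatenate the two vectors into one combined sentiment score."""
    return list(valence) + list(intensity)

first_score = concat_score([-1.0, 0.5, 2.0], [0.3, 0.9])
print(first_score)       # [-1.0, 0.5, 2.0, 0.3, 0.9]
print(len(first_score))  # 5
```

The resulting vector preserves both components end to end, which is what distinguishes concatenation from the weighted fusion a single scalar score would imply.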
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the sentiment scoring features in Burger with the fusion methodology of Zhang, since this heterogeneous fusion, multimodal sentiment classification method facilitates mining different granular sentiment features within various modal data (see at least Zhang, ¶6-7), thereby enhancing the overall sentiment content recommendation system of Burger.

Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Burger et al., U.S. Patent Application Publication 2015/0296239 in view of Chopdekar et al., U.S. Patent Application Publication 2022/0114344, and further in view of Shteyn et al., U.S. Patent Application Publication 2009/0226046.

Regarding claim 12, Burger in view of Beisel discloses: All the limitations of the corresponding parent claims (claims 8 and 10-11) as per the above rejection statements. Burger discloses: Video content (see at least Burger, ¶27, 29, 31, 47-48). Ad slot break (see at least Burger, fig. 2B, ¶48). The Burger/Beisel combination formulated in the rejection of claim 8 does not disclose: (further comprising causing a media device to insert the alternative secondary content between the first portion of primary content and the second portion of primary content). However, Shteyn discloses: A method of characterizing a program includes defining a scene as a portrayal of an emotion of a first character and identifying each scene within a first program to apportion the first program into a series of scenes. An emotional profile of the first program is built according to the series of scenes. Recommendation of a program includes correlating the emotional profile of the first program with a user preference profile (see at least Shteyn, ¶6-7). The advertisement module 680 is configured to insert advertisements into a program via an interruptive function 682 or a parallel function 684.
The interruptive function 682 places an advertisement between otherwise consecutive scenes of the program while the parallel function 684 displays advertisements in parallel with one or more scenes. In other words, in the parallel function 684, the advertisement is displayed simultaneously with one or more scenes in the form of a caption, picture-in-picture, subtitle or other mechanism (see at least Shteyn, fig. 7, ¶88). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Burger with the ad insertion between two scenes feature of Shteyn, since with this arrangement, a user would better appreciate the smoother flow from a high-definition scene to a high-definition advertisement (see Shteyn, ¶90). Moreover, it would have been obvious to try inserting one or more advertisements between two scenes of video content, since this insertion mode is one of a finite number of identified, predictable scenarios (a finite number of identified, predictable potential solutions) to the recognized need of inserting ad content into video content, and one of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success.

Claims 15-19 are rejected under 35 U.S.C. 103 as being unpatentable over Kalra et al., U.S. Patent No. 11,900,407 in view of Chopdekar et al., U.S. Patent Application Publication 2022/0114344.

Regarding claim 15, Kalra discloses: (determining, based on text data associated with one or more segments of primary content, reaction information associated with the one or more segments of primary content); (primary content). First data including content preferences (see at least Kalra, abstract, fig. 5, ¶9:37-39). Generate customer preferences.
The customer predictor 310 operates to receive input such as audience data 312, content data 314, and message type data 316, which is in turn provided to a prediction model 318 to generate customer preferences 319. (see at least Kalra, fig. 3, 5, ¶8:5-48). (determining, based on text data). (see at least Kalra, abstract, ¶17:1-23, 20:47-54). (sentiment score associated with the primary content). Content analyzer 320 operates to receive input such as sample data/content 322, which is in turn provided to a content model 324 to generate content scores 326. Further, the content model 324 may be configured for scoring content along a plurality of dimensions, for example, dimensions of various meta data such as emotion, sentiments, intents, and the like. (see at least Kalra, fig. 3, ¶8:27-38). (sentiment score associated with the primary content). Equipped with the knowledge of the categorized and tagged content collected in the sentiment repository 656, a machine learning model (e.g., an emotion engine 654) may be configured to score items of content based on the respective category. In some embodiments, based on the scoring of items, an item of content may be verified in terms of whether the content incurs the desired sentimental effect. In other embodiments, also based on the scoring of items, items of content may be recommended to effectuate the desired sentimental effect. (see at least Kalra, fig. 6C, ¶8:27-38) (secondary content) Receiving second data comprising an initial digital message being proposed for transmission to the audience, at 504; generating a recommendation data set based on the first data and the second data (based on the first sentiment score), at 506 (one or more segments of secondary content); determining, by a natural language machine learning model, suggested content for the audience, at 508 (one or more segments of secondary content); and providing the suggested content for dissemination to the audience, at 510 (see at least Kalra, fig. 
5, ¶9:39-49). In Kalra, first data is classified and scored along a plurality of dimensions, for example, dimensions of various meta data such as emotion, sentiments, intents; a further recommendation data set (one or more segments of secondary content) is generated based at least in part on the first data and the second data, wherein this recommendation is matched to the emotional and sentiment valuation assessed on the first (and/or second) data (see at least Kalra, fig. 4, ¶9:14-9:32). It follows, therefore, that Kalra implicitly teaches a second sentiment score of secondary content (a second sentiment score associated with secondary content, causing output of the secondary content). Based on the emotional/sentiment context characterization of the predicted customer preferences (e.g., customer intelligence) a relevant and engaging message (secondary content) is generated (see at least Kalra, fig. 4, ¶9:14-9:32), and likewise, based on the emotional/sentiment context characterization of the predicted customer preferences (e.g., customer intelligence) suggested content for the audience 508 (secondary content) is generated (see at least Kalra, fig. 5, ¶9:50-64). (a second sentiment score associated with secondary content, causing output of the secondary content). Content analyzer 320 operates to receive input such as sample data/content 322, which is in turn provided to a content model 324 to generate content scores 326. Further, the content model 324 may be configured for scoring content along a plurality of dimensions, for example, dimensions of various meta data such as emotion, sentiments, intents, and the like. Based on the intelligence gleaned by the audience predictor 310 and the content analyzer 320, the content recommendation engine 300 recommends personalized items of content (one or more second sentiment scores) (see at least Kalra, fig. 3, ¶8:27-49). (causing output of the secondary content).
A messaging engine 720 may be configured to transmit (causing, to be output) personalized messages 796, 798, and 799 to customers at respective personalized communication channels (see at least Kalra, fig. 7A, ¶15:64-16:8). Kalra does not disclose: (determining, based on audio data associated with the one or more segments of primary content, one or more intensity scores associated with the one or more segments of primary content), but Chopdekar teaches this limitation. Chopdekar discloses: (see at least Chopdekar, ¶14, “In some embodiments, data mining and machine learning tools and techniques are used to manage information, including analyzing content (one or more segments of primary content) and determining sentiment (reaction information). For example, data mining and machine learning may be used to determine communication information and sentiment information for various pieces of content including text (based on text data), audio (based on audio data), and/or visual data, to analyze the information, including comparing the sentiment information, and to manage information. Machine learning based models can have information about synonyms and antonyms associated with sentiments, for example, so they can identify synonymous and antonymous sentiments, as well as levels of synonymy (e.g., that a word may be more synonymous to one word than another). In some embodiments, levels of synonymy can correspond to weight scores. For example, the sentiments of sadness and grief may be classified as synonymous to the sentiment of anguish. However, the sentiment of grief may be more synonymous to the sentiment of anguish than the sentiment of sadness is to the sentiment of anguish. 
Likewise, if weight scores are assigned to these sentiments, a weight score of anguish may be closer to a weight score of grief than a weight score of sadness is to the weight score of grief (e.g., anguish may have a weight score of 90, grief may have a weight score of 80, and sadness may have a weight score of 55). Any of the information may be modified and act as feedback to the system. This may be done without human input.”). (based on audio data). Audio data (see at least Chopdekar, ¶14, 19, 54, 132). (intensity scores). Intensity of the sentiment (see at least Chopdekar, abstract, ¶17, 79). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to expand the sentiment scores in Kalra with the comparable sentiment scores of Chopdekar, since this modification would be a simple substitution of one known sentiment score determination element (the intensity feature and the based-on-audio-data feature taught by Chopdekar) for another (the sentiment scoring of Kalra), and the substitution produces no new and unexpected result. Moreover, this modification enhances the scoring functionality of Kalra. Chopdekar further discloses: The system weighs the intensity of the reaction to textual and audio content. A weight score (sentiment score) that is based on an intensity of a type of sentiment (see at least Chopdekar, ¶137, 169). Each piece or pieces of content (e.g., each portion of text, video, and/or audio within a piece of content) may be analyzed for sentiment. In some embodiments, the sentiment analyzer may determine a weight score for each sentiment and rank the sentiments that are detected for each portion of content (e.g., each of the communications received for a group within a certain timeframe, or each piece of content within the most recent communication received, etc.) in order of their weight scores, and then rank all of the portions of content and their associated sentiments.
In various embodiments, the content having the associated sentiment with the highest weight score is determined to be the current sentiment of the content, and may be set as the current sentiment of the group (see at least Chopdekar, ¶19; see also ¶14, 17, 24). In various embodiments, a confidence score may also be taken into consideration (see at least Chopdekar, ¶20), wherein it is noted that an average score value is within the algebraic context of taking a confidence score into consideration. Chopdekar does not specifically disclose: (an average sentiment score associated with the one or more segments of primary content). However, since, as per the above, Chopdekar teaches different ways to score (weigh) sentiments, further teaches that based on these various ways one particular weight score is determined to be representative of the current sentiment of the content, and further teaches that a confidence score could be considered, it would have been obvious to try, by one of ordinary skill in the art at the time of the invention, modifying the particular highest-weight-score criterion of Chopdekar to include an average sentiment score, since an average sentiment score is one of a finite number of identified, predictable scenarios (a finite number of identified, predictable potential solutions) to the recognized need of defining and determining a significant sentiment score, and one of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success. The above-formulated Kalra/Chopdekar combination does not disclose: (determining, based on the reaction information and the one or more intensity scores, an average sentiment score associated with the one or more segments of primary content).
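Purely as an illustrative sketch, and not part of the record: the highest-weight-score criterion described above, and the average-sentiment-score modification the rejection proposes, might look like the following. The weight scores reuse Chopdekar's example values (anguish 90, grief 80, sadness 55), while the intensity values and the intensity-weighted averaging formula are assumptions added for illustration.

```python
# Chopdekar's example weight scores; the intensity values are hypothetical.
segments = [
    {"sentiment": "anguish", "weight": 90, "intensity": 0.5},
    {"sentiment": "grief",   "weight": 80, "intensity": 0.25},
    {"sentiment": "sadness", "weight": 55, "intensity": 0.25},
]

# Highest-weight criterion as described: the top-ranked sentiment becomes
# the "current sentiment" of the content.
current = max(segments, key=lambda s: s["weight"])["sentiment"]
print(current)  # anguish

# The proposed modification: an average sentiment score over the segments,
# here weighted by intensity (one plausible, illustrative formulation).
average_score = (
    sum(s["weight"] * s["intensity"] for s in segments)
    / sum(s["intensity"] for s in segments)
)
print(average_score)  # 78.75
```

The averaging step stands in for the claimed "determining, based on the reaction information and the one or more intensity scores, an average sentiment score"; the actual combination of references does not dictate any particular formula.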
(based on a correlation between the average sentiment score and a second sentiment score associated with secondary content, causing output of the secondary content). However, since Kalra teaches a second sentiment score associated with secondary content, and a determination based on a correlation between a first sentiment score and a second sentiment score associated with secondary content, and Chopdekar further teaches an average sentiment score, it would have been obvious to try, by one of ordinary skill in the art at the time of the invention, modifying the teaching in Kalra (and/or in the above Kalra/Chopdekar combination) of a correlation between a first sentiment score and a second sentiment score associated with secondary content, further in view of the average sentiment score teaching of Chopdekar, since this modification would be a simple substitution of one known sentiment score determination element (the average sentiment score taught by Chopdekar) for another (the sentiment scoring of Kalra), and the substitution produces no new and unexpected result. Moreover, this average implementation takes into account all other score determinations.

Regarding claim 16, Kalra in view of Chopdekar discloses: All the limitations of the corresponding parent claims (claim 15) as per the above rejection statements. Kalra discloses: (text data) (see at least Kalra, abstract, ¶17:1-23, 20:47-54). Chopdekar discloses: (audio data) (see at least Chopdekar, ¶14, 19, 54, 132).

Regarding claim 17, Kalra in view of Chopdekar discloses: All the limitations of the corresponding parent claims (claim 15) as per the above rejection statements. The Kalra/Chopdekar combination formulated in the rejection of claim 15 discloses: (wherein the reaction information is configured to indicate how positive or negative the one or more segments of primary content are).
Kalra discloses: the sentiment category may be comprised of 2 or more subcategories selected from positive; neutral; and/or negative (see at least Kalra, ¶11:31-33, 13:19-28, 14:18-27). The Kalra/Chopdekar combination formulated in the rejection of claim 15 discloses: intensity of a sentiment (how positive or negative).

Regarding claim 18, Kalra in view of Chopdekar discloses: All the limitations of the corresponding parent claims (claim 15) as per the above rejection statements. Kalra discloses: The sentiment category may be comprised of 2 or more subcategories selected from positive; neutral; and/or negative (see at least Kalra, ¶11:31-33, 13:19-28). The Kalra/Chopdekar combination formulated in the rejection of claim 15 discloses: (wherein the one or more intensity scores are configured to indicate how intensely a sentiment is expressed in the one or more segments of primary content); intensity of a sentiment (how positive or negative).

Regarding claim 19, Kalra in view of Chopdekar discloses: All the limitations of the corresponding parent claims (claim 15) as per the above rejection statements. The Kalra/Chopdekar combination formulated in the rejection of claim 15 discloses: (wherein the one or more segments of primary content are one or more of: non-fictional news coverage or fictional content) (see at least Kalra, ¶13:25-28, 15:39-42).

Claim 20 is rejected under 35 U.S.C. 103 as being unpatentable over Kalra et al., U.S. Patent No. 11,900,407 in view of Chopdekar et al., U.S. Patent Application Publication 2022/0114344, and further in view of Shteyn et al., U.S. Patent Application Publication 2009/0226046.

Regarding claim 20, Kalra in view of Chopdekar discloses: All the limitations of the corresponding parent claims (claims 8 and 10-11) as per the above rejection statements. Kalra discloses: Video content (see at least Kalra, ¶26:13-22). Chopdekar discloses: Video content (see at least Chopdekar, ¶19, 54).
The Kalra/Chopdekar combination formulated in the rejection of claim 15 does not disclose: (further comprising causing a media device to insert the secondary content into the one or more segments of primary content). However, Shteyn discloses: A method of characterizing a program includes defining a scene as a portrayal of an emotion of a first character and identifying each scene within a first program to apportion the first program into a series of scenes. An emotional profile of the first program is built according to the series of scenes. Recommendation of a program includes correlating the emotional profile of the first program with a user preference profile (see at least Shteyn, ¶6-7). The advertisement module 680 is configured to insert advertisements into a program via an interruptive function 682 or a parallel function 684. The interruptive function 682 places an advertisement between otherwise consecutive scenes of the program while the parallel function 684 displays advertisements in parallel with one or more scenes. In other words, in the parallel function 684, the advertisement is displayed simultaneously with one or more scenes in the form of a caption, picture-in-picture, subtitle or other mechanism (see at least Shteyn, fig. 7, ¶88). Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Kalra with the ad insertion between two scenes feature of Shteyn, since with this arrangement, a user would better appreciate the smoother flow from a high-definition scene to a high-definition advertisement (see Shteyn, ¶90).
Moreover, it would have been obvious to try inserting one or more advertisements between two scenes of video content, since this insertion mode is one of a finite number of identified, predictable scenarios (a finite number of identified, predictable potential solutions) to the recognized need of inserting ad content into video content, and one of ordinary skill in the art could have pursued the known potential solutions with a reasonable expectation of success.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHAHID R MERCHANT whose telephone number is (571)270-1360. The examiner can normally be reached M-F 7:30-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Namrata Boveja, can be reached at 571-272-8105.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Shahid Merchant/ Supervisory Patent Examiner, Art Unit 3684

Prosecution Timeline

Apr 14, 2023
Application Filed
May 15, 2025
Non-Final Rejection — §101, §102, §103
Aug 07, 2025
Interview Requested
Sep 19, 2025
Response Filed
Feb 20, 2026
Final Rejection — §101, §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 8224723
ACCOUNT OPENING SYSTEM, METHOD AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Jul 17, 2012
Patent 8204810
SYSTEM AND METHOD FOR MATCHING AN OFFER WITH A QUOTE
2y 5m to grant Granted Jun 19, 2012
Patent 8195517
SYSTEM AND METHOD FOR FACILITATING A FINANCIAL TRANSACTION WITH A DYNAMICALLY GENERATED IDENTIFIER
2y 5m to grant Granted Jun 05, 2012
Patent 8185464
METHOD OF MAKING DISTRIBUTIONS FROM AN INVESTMENT FUND
2y 5m to grant Granted May 22, 2012
Patent 8165946
CUSTOMIZED FINANCIAL TRANSACTION PRICING
2y 5m to grant Granted Apr 24, 2012
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
29%
Grant Probability
54%
With Interview (+25.2%)
4y 9m
Median Time to Grant
Moderate
PTA Risk
Based on 136 resolved cases by this examiner. Grant probability derived from career allow rate.
