Prosecution Insights
Last updated: April 19, 2026
Application No. 18/215,619

SYSTEMS AND METHODS FOR PRESENTATION OF MEDIA CONTENT TO MULTIPLE USERS

Final Rejection: §101, §102, §103
Filed
Jun 28, 2023
Examiner
CHEN, BILL
Art Unit
3626
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Adeia Guides Inc.
OA Round
2 (Final)
Grant Probability: 0% (At Risk)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 0m
Grant Probability With Interview: 0%

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 9 resolved; -52.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with interview)
Avg Prosecution: 3y 0m
Total Applications: 24 across all art units (15 currently pending)

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 32.3% (-7.7% vs TC avg)
§102: 24.0% (-16.0% vs TC avg)
§112: 6.6% (-33.4% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 9 resolved cases

Office Action

§101 §102 §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Status of Claims

This Office action is in response to the amendment filed by the applicant on July 1, 2025. Claims 1–3, 6–13, and 16–20 have been amended and are hereby entered. Claims 1–20 are pending and have been examined. This action is made FINAL. The examiner notes that this application is now being handled by examiner Bill Chen.

Response to Arguments

Applicant’s arguments filed on July 1, 2025 have been fully considered but they are not persuasive.

Regarding Applicant’s arguments against the §101 rejection of claims 1–20 on pages 11 and 12: Applicant asserts that the independent claims are directed to a “technical solution to a technical problem” and analogizes the claims to those in DDR Holdings. However, the claims remain directed to the abstract idea of collecting user information, analyzing that information to predict a future interaction, and presenting content prior to that interaction. This constitutes mental processes (evaluation, prediction, comparison, decision-making) and certain methods of organizing human activity, specifically interpersonal interactions and influencing social behavior through content coordination. The focus of the claims is not on improving computer functionality, but rather on improving the timing and relevance of information presented to users in anticipation of a social interaction. That is a behavioral or organizational objective, not a technological one.

Further, Applicant relies on DDR Holdings, LLC v. Hotels.com. That reliance is misplaced. In DDR, the claims addressed a problem unique to the Internet (retaining website visitors when they click third-party links) and required a specific technical solution that modified conventional web page generation in a way that overrode standard hyperlink behavior.
In contrast, here, the identified problem (providing relevant content before a future meeting) is not unique to computer technology. The claims: (1) do not modify how computers operate; (2) do not change network architecture, data structures, or media rendering techniques; and (3) do not recite any non-generic protocol or system configuration. Instead, the claims merely use generic computing components as tools to automate social coordination.

Applicant further argues that manually determining relevant content before a future interaction is “laborious.” However, difficulty or complexity does not render an idea technological. The identified problem (selecting relevant content before a meeting) is fundamentally a social coordination problem, a behavioral management problem, or an information curation problem. It is not a problem rooted in computer architecture or network engineering. Automation of a mental or organizational task does not transform it into a technical solution.

Additionally, Applicant argues that the claims reduce operational load by minimizing user input or queries. This argument is not persuasive because the claims do not recite any specific technical mechanism that reduces processor load, memory usage, network traffic, or database operations. Further, there is no claimed improvement to resource allocation, caching, indexing, or transmission efficiency. The alleged benefit is merely a user-experience improvement. In sum, the claims merely automate a decision about when to show content; they do not improve how the computer performs its operations.

Applicant argues that “generating, for display, a media content item” is neither a mental process nor a method of organizing human activity. However, displaying the result of an abstract idea is insignificant post-solution activity. The underlying focus of the claim remains determining, predicting, deciding, and selecting.
The display step does not meaningfully limit the abstract idea. Applicant’s amendments and arguments have been considered but are not persuasive. The claims remain directed to mental processes and methods of organizing human activity, implemented using generic computer components. The rejection under 35 U.S.C. § 101 is therefore maintained.

Regarding Applicant’s arguments against the §102/§103 rejections of claims 1–20 on pages 12–13: Applicant’s arguments have been considered but are not persuasive. Applicant asserts that Sinha does not disclose “determining a predicted future interaction between the first user and the second user.” However, Sinha discloses predicting future user engagement behavior based on collected user data (see, e.g., Sinha ¶¶[0021]–[0022]). Sinha teaches generating predictions regarding future interactions with content using user behavior data. While Sinha may describe engagement prediction in the context of content interaction, predicting whether users will interact with content in the future constitutes predicting a future interaction. The claims do not require any particular form of interaction (e.g., an in-person meeting, direct communication, or a social media exchange). The broadest reasonable interpretation of “a predicted future interaction between the first user and the second user” encompasses predicted engagement involving both users within a shared content context. Sinha’s disclosure of predicting future engagement based on multi-user data reasonably teaches this limitation.

Applicant further argues that Sinha does not disclose generating the same content for display at respective devices prior to a predicted interaction. However, Sinha discloses presenting content based on predicted engagement outcomes. Delivering interactive content to users in response to predictive modeling inherently involves generating content for display at user devices before the predicted engagement event occurs.
The claims do not require a specific technical mechanism for generation, nor do they require synchronization in any particular manner beyond presenting the content prior to the predicted interaction. Sinha’s predictive content presentation satisfies this limitation under a broad reading. Accordingly, Sinha teaches or reasonably suggests each limitation of independent claims 1 and 11. Therefore, the rejections under §§ 102 and 103 are maintained.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1–20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more, and therefore does not recite patent-eligible subject matter. Claim 1 is treated as representative of independent claims 1 and 11.
Step 2A, Prong 1: The claims are directed to presenting media content to a first and a second user, determining whether the users consumed the media content, predicting a future interaction between the first and second users, and displaying related content prior to the predicted future interaction between the two users, which falls into the categories of “mental processes” and “methods of organizing human activity,” more specifically “managing personal behavior or relationships or interactions between people (including social activities).” For instance, claim 1 recites:

generating, for display, a first media content item at a first device to be presented to a first user and at a second device to be presented to a second user;

determining whether the first user and the second user have consumed the first media content item;

based at least in part on determining that the first user has consumed the first media content item at the first device and that the second user has consumed the first media content item at the second device: determining a predicted future interaction between the first user and the second user; and identifying a second media content item comprising content related to the first media content item; and

prior to the predicted future interaction occurring: generating, for display, the second media content item at the first device to be presented to the first user and at the second device to be presented to the second user.

These limitations describe a method and a system for personalized, predictive media delivery between two users.
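For orientation, the sequence these limitations describe can be reduced to a short procedural sketch. This is a minimal illustration of the characterized flow only; the function name, data structures, and the calendar-overlap heuristic standing in for “determining a predicted future interaction” are hypothetical and appear in neither the claims nor the cited art.

```python
# Minimal sketch of the characterized claim-1 flow. All names and the
# calendar-overlap heuristic are hypothetical illustrations, not claim
# language or disclosure from the application or from Sinha.

def deliver_related_content(first_user, second_user, first_item, catalog):
    """Present related content to both users before a predicted interaction."""
    # Determine whether both users have consumed the first media content item.
    if not all(first_item in u["consumed"] for u in (first_user, second_user)):
        return None
    # Determine a predicted future interaction (here: the earliest shared
    # calendar slot, a stand-in for any prediction mechanism).
    shared_slots = sorted(set(first_user["calendar"]) & set(second_user["calendar"]))
    if not shared_slots:
        return None
    predicted_slot = shared_slots[0]
    # Identify a second media content item related to the first.
    second_item = next((c for c in catalog if c["related_to"] == first_item), None)
    if second_item is None:
        return None
    # Generate the second item for display at both devices prior to the
    # predicted interaction (one slot earlier, chosen arbitrarily here).
    return {
        "item": second_item["id"],
        "display_at": predicted_slot - 1,
        "devices": [first_user["device"], second_user["device"]],
    }
```

Each step maps onto one recited limitation; nothing in the sketch requires more than generic data handling, which is the point of the eligibility analysis that follows.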
Thus, these limitations are directed to the abstract idea of a certain method of organizing human activity in the form of managing personal behavior or relationships or interactions between people, as the claims recite the steps of causing media content to be displayed to the first and second users, determining whether each user has consumed/viewed the media content, determining a predicted future interaction between the users, and then causing the second media content item to be displayed to the users in response to user interaction, all of which are conceptual steps that could be performed mentally or with pen and paper. The claim elements are interpreted as concepts capable of being performed in the human mind (including observation, evaluation, judgment, and opinion). The recited determining steps are fundamentally recognition/observation and cognitive tasks. The claims therefore recite an abstract idea consistent with the “mental processes” grouping set forth in MPEP 2106.04(a)(2)(III).

Step 2A, Prong 2: For independent claims 1 and 11, the claims do not integrate the abstract idea into a practical application. While the claims recite the use of control circuitry and concepts such as timing, trigger objects, or positive response thresholds, these do not impose a meaningful limitation on the abstract idea.
These limitations do not integrate the abstract idea into a practical application because:

- The hardware components, such as “control circuitry,” are described at a high level of generality, lacking any technical detail or specialized functionality;
- The analysis and delivery logic (e.g., predicting an interaction and selecting timing) is conceptual and would proceed similarly if performed by a human using pen and paper; and
- No technical improvement, specialized algorithm, interface, or unconventional architecture is recited in the claims.

The claimed invention merely uses generic computing components to automate a fundamental mental task, in this case presenting media content to multiple users by analyzing user interaction and suggesting a future interaction. Alternatively stated, the additional elements beyond the abstract idea itself, such as the “control circuitry,” are recited at a high level of generality. These components merely apply the abstract idea in a generic computing environment, which is not sufficient to integrate the idea into a practical application (see MPEP 2106.05(f)). These elements do not themselves amount to an improvement to the interface or computer, to a technology, or to another technical field.

Step 2B: For independent claims 1 and 11, the claim elements, viewed individually and as an ordered combination, do not include any additional limitations that amount to significantly more than the judicial exception.
- The “control circuitry” element(s) are conventional, generic computing components performing well-known functions such as storing, retrieving, and comparing data;
- The arrangement of the steps (collecting user data, determining predicted interactions, and delivering content) is a routine sequence found in behavioral recommendation systems; and
- The claims lack any non-conventional steps, innovative algorithm, or inventive ordering of operations.

As indicated in the Step 2A, Prong 2 analysis, the additional elements in the claims merely use a generic computing device, computing technologies, and/or other machinery as a tool, amounting to mere instructions to practice the invention. Thus, these elements do not render the claims eligible (refer to MPEP 2106.05(f) and 2106.05(h)). The claimed invention must improve upon the conventional functioning of a computer, or upon conventional technology or technological processes, and a technical explanation of how to implement the invention should be present in the specification. The rationale set forth for the second prong of the eligibility test above also applies and is re-evaluated in the Step 2B analysis. Therefore, this rationale is sufficient as a basis for rejection, consistent with MPEP 2106.

For dependent claims 2–10 and 12–20: these claims fall under the same abstract ideas of methods of organizing human activity and mental processes. They recite additional steps that further describe the abstract idea, namely determining when to show the media content, how to measure responses, whether users are in the same social group, scheduling delivery at convenient times, and using past user behavior to inform choices.
Thus, these claims are directed to the abstract idea grouping of mental processes, as these functions encompass observation, evaluation, judgment, and opinion and can be performed mentally or with pen and paper.

Step 2A, Prong 2 and Step 2B: The dependent claims do not include additional elements; they further instruct one to practice the abstract idea using general computer components that are merely used as a tool. These claim limitations therefore amount to no more than mere instructions to apply the exception using generic computer components and/or computing technologies (refer to MPEP 2106.05(f)).

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1–20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Sinha (U.S. Pub. No. 2022/0366299 A1).

Regarding claims 1 and 11: Sinha discloses:

generating, for display, a first media content item at a first device to be presented to a first user and at a second device to be presented to a second user; ([Fig. 1; ¶0031]: Content is generated and then transmitted to a client device to be displayed for users to see.)

determining whether the first user and the second user have consumed the first media content item; (In ¶0021: teaches “From each user, the content provider system collects data indicating whether a corresponding user device of the user interacted with the message by opening the message and accessing a hyperlink embedded in one of the set of images. The data also includes a timestamp identifying when the image was accessed by the user.”)

based at least in part on determining that the first user has consumed the first media content item at the first device and that the second user has consumed the first media content item at the second device; (In ¶¶0021–0022: teaches “The content provider system then generates a training dataset that includes the data collected from the plurality of users. The machine-learning model, based on the training data that includes previous user-device actions, can thus be trained to generate the categorical value (e.g., a particular stage of a set of funnel stages) that represents a predicted user-engagement level of the particular user in response to presentation of future interactive content.”)

determining a predicted future interaction between the first user and the second user; and ([¶0021, 0025]: A content provider system selects “follow-up” content to be generated and pushed to the user device of a particular user based on their previous interactions.)

identifying a second media content item comprising content related to the first media content item; and ([¶0024]: The content provider system applies ML models to monitor user activity and provide appropriate content.)
prior to the predicted future interaction occurring: generating, for display, the second media content item at the first device to be presented to the first user and at the second device to be presented to the second user. (In ¶0023: teaches “The content provider system receives user-activity data of a particular user. The user-activity data includes one or more user-device actions performed by the particular user in response to another interactive content.”)

Regarding claims 2 and 12: Sinha discloses:

determining first user information relating to the first user; (In ¶0023: teaches “The content provider system receives user-activity data of a particular user, in which the user-activity data includes a first user-device action indicating the user viewing the video content at a first time point”)

determining second user information relating to the second user; (In ¶0023: teaches “The content provider system receives user-activity data of a particular user, in which the user-activity data includes a first user-device action indicating the user viewing the video content at a first time point.” Additionally, ¶0023 also teaches “a second user-device action indicating the user creating, at a second time point, a user account in the website by accessing the hyperlink.”)

determining whether each of the first and second user information corresponds to a parameter of the first media content item; and (In ¶0038: teaches “Depending on the comparison, one or more hyperparameters associated with the machine-learning model can be adjusted.”)

causing, in response to determining that each of the first and second user information corresponds to the parameter of the first media content item, the first media content item to be presented to each of the first and second users.
Regarding claims 3 and 13: Sinha discloses:

determining whether a trigger object is in a field of view of each of the first and second users; and (In ¶0025: teaches “Continuing with this example, the content provider system transmits one or more programmable instructions that causes a browser of the user device to display the follow-up content as a pop-up window.”)

causing, in response to determining that the trigger object is in the field of view of each of the first and second users, the first media content item to be presented to each of the first and second users. (In ¶0025: teaches “Continuing with this example, the content provider system transmits one or more programmable instructions that causes a browser of the user device to display the follow-up content as a pop-up window.” Additionally, ¶0025 teaches: “In some instances, the follow-up interactive content is a targeted interactive content generated specifically for the particular user.”)

Regarding claims 4 and 14: Sinha discloses:

determining first user information relating to the first user, wherein the first user information comprises at least one of calendar data, geolocation data or virtual location data of the first user, or online comments of the first user; (In ¶0032: teaches “For example, a user-device action includes opening and responding to the interactive content, the user highlighting/commenting/annotating a specific area of the interactive content, sharing the interactive content with other users, reading part of the interactive content at a certain speed, clicking a hyperlink within the interactive content”)

determining second user information relating to the second user, wherein the second user information comprises at least one of calendar data, geolocation data or virtual location data of the second user, or online comments of the second user; and (In ¶0032: teaches “For example, a user-device action includes opening and responding to the interactive content, the user
highlighting/commenting/annotating a specific area of the interactive content, sharing the interactive content with other users, reading part of the interactive content at a certain speed, clicking a hyperlink within the interactive content”)

determining the predicted future interaction based on the first and second user information. (In ¶¶0032–0033; Fig. 1: teaches “In response to transmitting the interactive content 112, the content provider system receives user-activity data 118 from the client device 114. The content provider system 102 also includes a classifier subsystem 116 that applies the trained machine-learning model 104 to user-activity data 118 provided by the client device 114 so as to predict whether a user associated with the client device 114 will engage with a particular type of future interactive content.”)

Regarding claims 6 and 16: Sinha discloses:

determining a duration of the second media content item; and (In ¶¶0021–0022: teaches “The data also includes a timestamp identifying when the image was accessed by the user. The machine-learning model was trained by identifying a time period within which the previous user-device actions were performed. Continuing with this example, the earliest timestamp and the last timestamp of user-device actions identified from the training dataset are selected, and the timestamps are used to determine that the time period for training the machine-learning model is three months.”)

causing the second media content item to be presented at the first device and the second device at a first point in time spaced before the predicted future interaction by at least the duration. (In ¶0032; Fig. 1: teaches “In response to transmitting the interactive content 112, the content provider system receives user-activity data 118 from the client device 114. The user-activity data 118 refers to one or more user-device actions performed by the client device 114 that are generated in response to presentation of the interactive content 112. The one or more user-device actions can be contextual to the interactive content that was presented.”)

Regarding claims 7 and 17: Sinha discloses:

determining a first convenient point in time for the first user using the first user information, and a second convenient point in time for the second user using the second user information, wherein each of the first and second convenient points in time precede the first point in time, and the first and second convenient points in time are different from each other; and (In ¶0037: teaches “identifying a time period within which the previous user-device actions were performed; splitting the time period into a set of time windows; and training, for each time window of the set of time windows, the machine-learning model using a subset of the training dataset. In some instances, the subset of the training dataset includes previous user-device actions identified as being performed within the time window.”)

causing the second media content item to be presented at the first device at the first convenient point and to the second device at the second convenient point. (In ¶¶0038–0039; Fig. 2: teaches “The machine-learning model is then applied to the first and second previous user-device actions to generate another output including a categorical value that represents a predicted user-engagement level of the previous user in response to the presentation of the future interactive content.”)

Regarding claims 8 and 18: Sinha discloses:

determining a first window of time of the first user and a second window of time of the second user, each of the first and second windows of time having a start point and an end point in time; and (In ¶0051; Fig.
4: teaches “The window definition 404 also includes defining rolling time windows 408 within the time period 406. In particular, the time period 406 is divided into a set of rolling time windows 408. Each of the rolling time windows 408 defines a time range within the time period 406. “) determining a second point in time indicating a start of the predicted future interaction, wherein the end point of each of the first and second windows of time and the second point match one another; (In ¶0052; Fig. 4: teaches “The recency classification identifies a number of time windows the most recent previous user-device action was performed before a particular time point within the time period 406 (e.g., a time point corresponding to an end of training phase). The frequency classification identifies a number of times in which previous user-device actions were performed within the time period 406.”) setting the first convenient point in time in between the start point of the first window of time and the first point in time and setting the second convenient point in time in between the start point of the second window of time and the first point in time. (In ¶0051; Fig. 4: teaches “The window definition 404 also includes defining rolling time windows 408 within the time period 406. In particular, the time period 406 is divided into a set of rolling time windows 408. Each of the rolling time windows 408 defines a time range within the time period 406. 
The time range of the rolling time window can be configured by the training system, such that the machine-learning model trained using multiple rolling time windows produces more reliable (less sensitive to time) results than training the machine-learning model using single time window.” Additionally, ¶0051 also teaches “For example, a first rolling time window identifies a third position within a set of rolling time windows, and a second rolling time identifies a seventh position within the same set of rolling time windows.” [Examiner’s Note: Under BRI, the examiner interprets the claim language to read as setting and partitioning the windows of time to represent different groups/labels.]) Regarding claims 9 and 19: Sinha discloses: determining, in response to determining that each of the first and second users has consumed the first media content item, a positive response of the first user and a positive response of the second user to the first media content item; (In ¶0059; Fig. 5: teaches “The user-device action identifier 506 identifies a type of user-device action performed in response to the presentation of the interactive-content. Examples of the types of user-device actions include, but are not limited to the following: (i) opening an interactive-content file; (ii) sending a message in response to the interactive-content file; (iii) clicking the interactive-content file; (iv) generating a task by the content provider system in response to the user-device action; (v) indicating by the content provider system that the user performing the user-device action has changed; (vi) designating, by the content provider system, the user as an “Add to Nurture” status; and (vii) designating, by the content provider system, the user-device action as an “Interesting Moment” status.”) determining whether each of the positive response of the first user and the positive response of the second user has exceeded a positive response threshold; and (In ¶0055; Fig. 
4: teaches “the training dataset 402 includes substantially less positive classes relative to negative classes. To combat this class imbalance, the training configuration 414 allows hyperparameter tuning 424 that includes adjusting hyperparameters to assign lower weight values to negative classes and higher weight values to positive classes. The training process 400 further includes identifying user-defined threshold 426. The user-defined threshold is compared with a score generated for each data element of the training dataset 402.”)

causing, in response to determining that each of the positive response of the first user and the positive response of the second user has exceeded the positive response threshold, the second media content item to be presented at the first device and the second device prior to the predicted future interaction. (In ¶0060; Fig. 5: teaches “the activity log 500 includes a previous classification 510 that indicates a first degree of likelihood that the user will perform a subsequent user-device action in response to the follow-up interactive content… the previous classification 510 identifies that the user has been associated with a “known” stage, which indicates that the user is known to the content provider system. The activity log 500 further includes a new classification 512 that indicates a second degree of likelihood that the user will perform the subsequent user-device action in response to the follow-up interactive content.”)

Regarding claims 10 and 20: Sinha discloses:

generating, for display, a first media content item at a first device to be presented to a first user and at a second device to be presented to a second user; ([Fig. 1; ¶0031]: Content is generated and then transmitted to a client device to be displayed for users to see.)
determining the predicted future interaction between the first user and the second user further comprises determining a plurality of predicted future interactions between the first user and the second user; and ([¶0024]: The content provider system applies ML models to monitor user activity and provide appropriate content. [¶0040-0041]: User engagement is predicted in order to generate a categorical value.)

generating, for display, the second media content item to be presented at the first device and the second device prior to the predicted future interaction further comprises: ([Fig. 1; ¶0031]: Content is generated and then transmitted to a client device, where it is displayed for users to see.)

determining a time gap between each two successive predicted future interactions of the plurality of the predicted future interactions; (In ¶0072: teaches “For each time window, features are updated for the user. For example, a user performs its first user-device action in response to an interactive content at time window t_0. The user then performs a second user-device action that is indicative of a presentation of the follow-up interactive content will trigger a subsequent user-device action, in which the second user-device action is performed within the time window (t_0+2w, t_0+3w].”)

mapping each predicted future interaction to each time gap, preceding and adjacent to each predicted future interaction; (In ¶0072: teaches the same time-window feature updates quoted immediately above.)

setting a presentation threshold related to the second media content item based on the lifetime of the first media content item; (In ¶0072; Fig. 6: teaches “As shown in FIG. 6, the time period (t-T, t) is divided into a set of time windows t_0, t_0+w, t_0+2w, t_0+3w . . . , in which “w” specifies the size of the time window. In some instances, the size of the time window is less than the time range assigned for label creation “LC.””)

determining whether each time gap exceeds the presentation threshold; and (In ¶0073: teaches “the machine-learning model generates predicted output at multiple instances for a given user in the training dataset, in which each instance corresponds a time window at which the user is predicted to be non-responsive until the time period ends or until training reaches a time window in which the user is predicted to be responsive.”)

causing the second media content item to be presented at the first device and the second device prior to each predicted future interaction, wherein the mapped time gap exceeds the presentation threshold. (In ¶0072; Fig. 2: teaches “For example, a user performs its first user-device action in response to an interactive content at time window t_0. The user then performs a second user-device action that is indicative of a presentation of the follow-up interactive content will trigger a subsequent user-device action, in which the second user-device action is performed within the time window (t_0+2w, t_0+3w].”)

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims are rejected under 35 U.S.C. 103 as being unpatentable over Sinha (U.S. Pub No. 20220366299 A1) in view of Lee (U.S. Pub No. 20140222580 A1).

Regarding claims 5 and 15: Although Sinha discloses a network interface device comprising a group of devices, Sinha does not explicitly disclose determining whether the first and second users each belong to a social circle based on user information. However, Lee teaches:

determining whether the first and second users each belong to a social circle based on the first and second user information; and (In ¶0028; Fig. 2: teaches “Data in the user profile 212 may be linked to data of other users to allow correlation with the data of other users.”)

causing, in response to determining that the first and second users each belong to the social circle, the second media content item to be presented to each of the first and second users prior to the predicted future interaction. (In ¶0028; Fig. 2: teaches “The user may also provide input to the user profile 212 to identify the type of information the user wants to receive as suggestions. Thereafter, the profile analysis and recommendation engine 270 may generate data for populating the user's calendar.
Additional data may be provided to the mobile device 210 of the user to allow the user to make queries regarding events associated with an identified artist in the user profile 212. Thus, certain information may already be loaded in the user profile 212 on a mobile device.”)

It would have been obvious to one of ordinary skill in the art before the earliest effective filing date of the claimed invention to modify Sinha’s disclosed method of provisioning interactive content based on predicted user-engagement levels to determine whether the first and second users each belong to a social circle based on the first and second user information, as taught by Lee, in order to “assist a user in identifying events and providing context to user regarding events that matches user preferences” (¶0005, Lee).

Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.

Bhan (US20230325857 A1) is pertinent because it is directly related to “selectively engaging users in a network system that create sentiment-based content.”

Holtzclaw (US9369536 B1) is pertinent because it is directed to “generating a timeline of significant events associated with a user and of user behavior that is contextually relevant to the events are described herein.”

Welinder (US20150180980 A1) is pertinent because it is related to “systems and methods for creating shared virtual spaces.”

Singh (WO2020219245 A1) is pertinent because it is directly related to “enabling providing event suggestions based on input from a plurality of data sources including: user data including interests, travel modes and habits, calendar data including free/busy and location information associated therewith, map data including means for determining current and predicted traffic conditions and event data corresponding to a plurality of events from which recommendations are generated.”

Chen (US20190222899 A1) is pertinent because it is directed to “the field of Internet
technologies, and in particular, to a media content recommendation method, a server, a client, and a storage medium.”

Lewis (US20240223527 A1) is pertinent because it is related to “social media sharing and more particularly to systems, methods and computer readable media that facilitate identifying and recommending content for sharing in a social setting based on personal user preferences and privacy settings.”

Smarr (US10122791 B2) is pertinent because it is directly related to “computer software systems and methods, in particular, systems and methods for the creation and maintenance of social networks in social networking applications.”

Koshy (US20240259634 A1) is pertinent because it is directed to “systems and methods for providing media content recommendations for various media content distribution systems, and more particularly providing media content recommendations based on user input to a reinforcement learning model.”

Kalmes (US20140149326 A1) is pertinent because it is related to “the recommendation of media content items.”

Shaw (EP3105928 A1) is pertinent because it is directly related to “delivering media content to an output device.”

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Bill Chen whose telephone number is (571) 270-0660. The examiner can normally be reached Monday - Friday, 8:30am - 5:00pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber, can be reached at (571) 270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/BILL CHEN/
Examiner, Art Unit 3626

/NATHAN C UBER/
Supervisory Patent Examiner, Art Unit 3626

Prosecution Timeline

Jun 28, 2023
Application Filed
Jun 27, 2025
Non-Final Rejection — §101, §102, §103
Oct 01, 2025
Response Filed
Jan 22, 2026
Final Rejection — §101, §102, §103 (current)


Prosecution Projections

3-4
Expected OA Rounds
0%
Grant Probability
0%
With Interview (+0.0%)
3y 0m
Median Time to Grant
Moderate
PTA Risk
Based on 9 resolved cases by this examiner. Grant probability derived from career allow rate.
