Prosecution Insights
Last updated: April 19, 2026
Application No. 18/861,661

SYSTEM AND METHOD FOR RANKING RECOMMENDATIONS IN STREAMING PLATFORMS

Status: Final Rejection (§102, §103)
Filed: Oct 30, 2024
Examiner: DOSHI, AKSHAY
Art Unit: 2422
Tech Center: 2400 (Computer Networks)
Assignee: Jio Platforms Limited
OA Round: 2 (Final)

Grant Probability: 64% (Moderate)
Estimated OA Rounds: 3-4
Estimated Time to Grant: 3y 3m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 64% (171 granted / 268 resolved; +5.8% vs TC avg)
Interview Lift: +39.2% among resolved cases with an interview (strong)
Typical Timeline: 3y 3m average prosecution; 30 applications currently pending
Career History: 298 total applications across all art units

Statute-Specific Performance

§101: 4.3% (-35.7% vs TC avg)
§103: 53.8% (+13.8% vs TC avg)
§102: 16.9% (-23.1% vs TC avg)
§112: 13.8% (-26.2% vs TC avg)

Tech Center averages are estimates. Based on career data from 268 resolved cases.

Office Action

Grounds: §102, §103
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Status

Claims 1-7 and 9-18 are amended. Claim 8 is canceled. No claims are newly added. Claims 1-7 and 9-18 are presented for examination.

Response to Arguments

Applicant's arguments filed in the amendment of 10/14/2025 have been fully considered but are not persuasive; the reasons are set forth below.

Applicant argues (Remarks, pages 10-11) that Kaya does not disclose generating any form of matrix, let alone a time-sequenced multidimensional matrix representing user interaction data; that Kaya operates on lists of "eligible shows," categories (current, trailing, library), and feature values, with no disclosure of transforming watch history and interaction data into a structured matrix representation; and that Kaya uses rule-based filtering (categorization, effective-time sorting) followed by a machine-learning predictor, which is fundamentally different from indexing and ranking a time-sequenced data matrix, a core aspect of the claimed architecture.

In response, the examiner respectfully points out that the broadest reasonable interpretation of the term "matrix" is data represented as a two-dimensional array. Kaya in par. 0071 discloses user behavior information collected as training data, such as timestamped watch behavior and click/load behavior of the watchlist, where all the aforementioned features at a certain time point can be calculated using the history and video metadata at that point. For example, if a user's clicks on the watchlist tray are clicks C1, C2, . . . , Cn, at times t1, t2, . . . , tn, correspondingly, then for each click Ci, particular embodiments calculate the features at time ti (using the history H up until ti). Let s be the clicked show, and s′ any other show that is shown in the watchlist but not clicked at time ti. A positive instance is generated for s, and negative instances are generated for each s′. Therefore, two-dimensional data (i.e., a matrix) is generated from the user watch history, containing a list of shows each associated with a positive or negative value based on whether it was clicked. Hence, Kaya discloses generating a multidimensional matrix. Further, the fact that the user's clicks C1, C2, . . . , Cn occur at times t1, t2, . . . , tn, correspondingly, indicates that the matrix was generated with, or using, time-sequenced user interaction data. The examiner is interpreting "generating a user data matrix with the time sequence" as generating a user data matrix using user interactions with a time sequence. The current claim language does not recite "a time-sequenced data matrix"; if, as applicant suggests, it should be so interpreted, the examiner suggests claiming it explicitly. In response to applicant's argument that the references fail to show certain features of the invention, it is noted that the features upon which applicant relies (i.e., "a time-sequenced data matrix") are not recited in the rejected claims. Although the claims are interpreted in light of the specification, limitations from the specification are not read into the claims. See In re Van Geuns, 988 F.2d 1181, 26 USPQ2d 1057 (Fed. Cir. 1993).

Applicant further argues (Remarks, page 10) that claim 1 requires the AI engine to "predict...
a time-wise user preference in an offline mode based on the ranking of the user data matrix and a new content suggestion," followed by generating recommendations in online mode based on the optimized model and "subscribed user activity," and that Kaya lacks any teaching of offline vs. online modes or of a temporal-sequence prediction mechanism tied to time-wise user preference. Applicant contends that Kaya predicts only the order in which a user may watch shows, based on current availability and preference probabilities, not time-wise preferences or future temporal sequences of consumption; that Kaya does not discuss offline model generation versus online recommendation serving; that its watch list is dynamically updated, but not based on any distinction between offline model training and online prediction; and that Kaya does not disclose or even suggest incorporating new content suggestions into any time-wise preference model, so the claimed bifurcation of the system into offline prediction and online recommendation based on temporal user sequencing is entirely absent.

In response, the examiner respectfully points out that Kaya in par. 0074 discloses that if there is enough historical watch history data to determine that the user explicitly watched show A before show B (i.e., a time-wise user preference: watching one show before another is a preference of sequence in date/time terms) when they both had new episodes, then machine-learning predictor 506 should list show A before show B. Kaya in par. 0098 further discloses that machine-learning predictor 506 may also use explicit preference to determine that the probability of the user watching one show is higher than other shows, and may predict that the user clearly prefers show A over show B due to the explicit preference in the user's watch history. For example, show A airs on Wednesdays and show B airs on Thursdays. The user logs into the video delivery system on Friday and watches show A before show B (again, a time-wise user preference), so machine-learning predictor 506 may predict that the user clearly prefers show A over show B due to the explicit preference in the user's watch history, and may predict that the user has a higher preference for show A than show B when both shows have new episodes available. Par. 0106 further discloses that a model 704 is trained using the user's watch history. The training process derives samples from implicit user feedback, such as video views and loads/clicks on the watch list, and explicit user feedback, such as a user affirmatively selecting one show over another. The model 704 is then built to predict a user's affinity to shows. Hence, based on paragraphs 0074, 0098, and 0106, Kaya clearly suggests that the model incorporates time-wise user preference, i.e., a preference for watching one show over others in terms of date/time. Kaya in fig. 7A shows the machine-learning predictor suggesting shows that would probably be selected by the user; hence this is done as an offline process, not in real time while the suggestion is displayed to the user.

Applicant further argues (Remarks, pages 10-11) that amended independent claim 1 requires offline prediction followed by online recommendation, and repeats that Kaya lacks any teaching of offline vs. online modes or a temporal-sequence prediction mechanism tied to time-wise user preference, predicts only the order in which a user may watch shows based on current availability and preference probabilities, does not discuss offline model generation versus online recommendation serving, and updates its watch list dynamically without any distinction between offline model training and online prediction.

In response, the examiner respectfully points out that Kaya, par. 0098, discloses an example of predicting future shows yet to be available based on the user's preference for watching one show before another. In the example, show A airs on Wednesdays and show B airs on Thursdays. The user logs into the video delivery system on Friday and watches show A before show B; machine-learning predictor 506 may predict that the user clearly prefers show A over show B due to the explicit preference in the user's watch history, and may predict that the user has a higher preference for show A than show B when both shows have new episodes available. Hence, Kaya discloses user preferences about watching one show before another; because there is a time difference between the viewings, these are time-wise user preferences. Since the prediction concerns future shows yet to be available, the prediction is made via modeling ahead of time, i.e., offline (not while the user is online or in real time). Kaya in par. 0109 further discloses that watch list generator 108 is integrated with video delivery system 106 to dynamically update user interfaces on client devices. FIG. 8 depicts an example of dynamically updating watch list 110 according to one embodiment. Watch list 110 must be updated dynamically to represent the current user's predicted order of watching shows; if a user watches an episode of a show and watch list 110 is not updated, then the predicted order in watch list 110 may not be accurate. Kaya in par. 0128 discloses that the delivery of video content by streaming may be accomplished under a variety of models. In one model, the user pays for the viewing of video programs, for example, using a fee for access to the library of media programs or a portion of restricted media programs, or using a pay-per-view service.
Hence, Kaya discloses that once the watch list is displayed to the user, the watch list (i.e., recommended content) is updated dynamically while the user watches a specific show (i.e., the user is in an online mode watching shows), and the user's activity of watching content is performed under a subscription model. Kaya discloses predicting the watch list, with a possible sequence of media the user may want to watch, in advance, which is considered the offline mode; once the watch list generated from the user's watch-history model is displayed while the user is online, it is updated dynamically as the user watches.

Applicant argues (Remarks, page 11) that Kaya does not disclose generating a user data matrix with a time sequence as required by amended independent claim 1; that the cited passages (e.g., pars. 0071, 0074) merely state that features at a particular time can be calculated from watch history or that the predictor can label selections as positive or negative; that these are isolated feature calculations, not the construction of a matrix with explicit time sequencing or a structured representation combining time-ordered watch data with interaction behavior; that a matrix with a temporal dimension is a concrete data structure, while Kaya uses only individual feature vectors derived per event, not a persistent, indexed matrix containing a time-ordered series of events; and that Kaya's "ranking" is simply comparing two shows with respect to newly available episodes, not ranking an indexed time-structured matrix using a stated "primary technique." Thus, applicant concludes, neither the matrix creation nor the matrix indexing/ranking required by the claim is taught or suggested.

In response, the examiner respectfully points out that applicant's argument is based on a specific definition of the term "matrix" without any specific definition provided in the claim itself. In that case, the broadest reasonable interpretation of the term "matrix" is data represented as a two-dimensional array. Kaya in par. 0071 discloses user behavior information collected as training data, such as timestamped watch behavior and click/load behavior of the watchlist, where all the aforementioned features at a certain time point can be calculated using the history and video metadata at that point. For example, if a user's clicks on the watchlist tray are clicks C1, C2, . . . , Cn, at times t1, t2, . . . , tn, correspondingly, then for each click Ci, particular embodiments calculate the features at time ti (using the history H up until ti). Let s be the clicked show, and s′ any other show that is shown in the watchlist but not clicked at time ti. A positive instance is generated for s, and negative instances are generated for each s′. Therefore, Kaya indeed teaches that two-dimensional data (i.e., a matrix) is generated from the user watch history, containing a list of shows each associated with a positive or negative value based on whether it was clicked. Hence, Kaya discloses generating a multidimensional matrix. Further, the user's clicks C1, C2, . . . , Cn at times t1, t2, . . . , tn indicate that the matrix was generated with, or using, time-sequence-based user interaction data. Kaya in par. 0074 discloses that if show A and show B are both current shows the user is caught up with (i.e., watch history), and show A has a new episode available while show B does not, then the machine-learning predictor ranks show A higher than show B, i.e., ranking/indexing the list of shows the user has watched based on the user's watching and interaction history. The term "primary technique" is interpreted as a basic technique of ranking the data, since no specific definition for this term is provided in the claim itself.
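The instance-generation scheme the examiner reads onto Kaya par. 0071 (each click Ci at time ti yields one positive instance for the clicked show and negative instances for every show displayed but not clicked) can be sketched as follows. This is an illustrative reconstruction, not code from either document; the data layout and the watchlist_at helper are assumptions.

```python
# Sketch of Kaya-style training-instance generation (par. 0071): each
# watchlist click produces one positive row for the clicked show and
# negative rows for every show shown but not clicked at that time.

def build_instances(clicks, watchlist_at):
    """clicks: list of (t_i, clicked_show) pairs in time order.
    watchlist_at: assumed helper mapping a timestamp to the shows
    displayed at that moment."""
    rows = []
    for t_i, clicked in clicks:
        for show in watchlist_at(t_i):
            label = 1 if show == clicked else 0  # positive vs. negative instance
            rows.append((t_i, show, label))
    # The result is a time-ordered 2-D table of (time, show, label) rows,
    # i.e. the "two-dimensional array" reading of the claimed matrix.
    return rows

instances = build_instances([(1, "A"), (2, "C")], lambda t: ["A", "B", "C"])
```

Whether such per-event rows amount to the claimed "user data matrix" is of course the point in dispute; the sketch only makes the examiner's two-dimensional-array reading concrete.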
Applicant argues (Remarks, pages 11-12) that Kaya does not disclose "the specific multi-stage optimization pipeline recited in the claim. Amended independent claim 1 requires: (1) generating a sparse implicit interaction dataset, (2) generating an initial distribution of data via a 'primary technique,' (3) indexing this distribution and scaling indices, and (4) reiterating the primary technique on the scaled indices to generate an optimized model. Kaya par. 0071 merely describes producing individual positive/negative examples for training; it does not construct or maintain a sparse implicit dataset in the sense of a matrix with predominantly missing interactions, which is a recognized data structure in recommender systems. Likewise, Kaya does not disclose any 'initial distribution of data,' nor does it teach generating indices, scaling those indices, or reapplying a primary technique on scaled indices. Training a single model one time using implicit and explicit feedback (pars. 0072, 0106) is not the same as the iterative scaling/optimization loop expressly recited. Therefore, Kaya's single-pass model training is different from the multi-step iterative optimization process required by amended independent claim 1."

In response, the examiner respectfully points out that claimed steps (1)-(4) for generating the model are similar to the algorithmic model training disclosed in Kaya, pars. 0071-0072 and 0106. Kaya in pars. 0071-0072 discloses separating (i.e., generating) data based on implicit user feedback (i.e., interaction), showing which shows the user clicked on and which the user did not; a show the user did not click on equates to a sparse or missing user interaction. Generating positive and negative instances based on whether the user clicked on a show equates to generating an initial distribution of viewing data from the user's implicit behavior of clicking on certain shows and not on others. Shows shown but not clicked are labeled as negative and shows clicked are labeled as positive; labeling the data as negative or positive equates to indexing the data to generate indices, and reducing the mathematical calculation to a comparable score (i.e., positive or negative) equates to scaling or simplifying the indexed data. Pars. 0071-0072 further disclose that the model is trained with an algorithm such as C4.5, a supervised machine-learning method that constructs a tree model by recursively (i.e., repeatedly reapplying the technique) splitting data on the attributes that provide the highest normalized information gain.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention. (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 7, 9, 11, 17, and 18 are rejected under 35 U.S.C. 102(a)(1) and/or 102(a)(2) as being anticipated by Kaya et al. (US 20150365729).
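As background for the "sparse implicit interaction dataset" limitation disputed above: in recommender-systems practice such a dataset is typically a user-item matrix in which only observed interactions are stored and most cells are missing. A minimal sketch follows; the dict-of-dicts layout and the toy catalog size are illustrative choices, not from either document.

```python
# Sparse implicit-feedback dataset: only observed (user, show) interactions
# are stored; every absent pair is implicitly "no interaction".

def build_sparse(events):
    """events: iterable of (user, show) implicit-feedback pairs."""
    matrix = {}
    for user, show in events:
        matrix.setdefault(user, {})[show] = 1  # implicit positive signal
    return matrix

m = build_sparse([("u1", "showA"), ("u1", "showB"), ("u2", "showA")])
# Density over a 2-user x 3-show catalog: 3 observed cells of 6 possible.
density = sum(len(row) for row in m.values()) / (len(m) * 3)
```

The defining property is that unobserved cells are absent rather than stored as zeros, which is what distinguishes this structure from the per-event positive/negative instances of Kaya par. 0071.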
Regarding claim 1, Kaya discloses a system for generating personalized recommendations based on user preferences (pars. 0021-0022: a system to generate a personalized watch list, analyzing historical user behavior with respect to the timing of recurring episode releases to determine the order of the shows in the watch list), the system comprising: a processor; and a memory operatively coupled with the processor, wherein said memory stores instructions, which when executed by the processor (par. 0129, fig. 10: an apparatus 1000 for viewing video content and advertisements, which may include a processor (CPU) 1002 operatively coupled to a processor memory 1004 holding binary-coded functional modules for execution by the processor 1002), cause the processor to:

receive a user parameter from one or more users via a computing device, wherein the one or more users operate the computing device and are connected to the processor via a network (par. 0024, fig. 1: a video delivery system 106 (i.e., server/processor) provides videos on demand to users using client devices 104 (i.e., computing devices operated by users)), and wherein the user parameter is based on a user watch history and a user interaction data with a time sequence (par. 0025: a media player may send signals to video delivery system 106 as to what shows and episodes are requested/watched, and video delivery system 106 records a user's watch history when the user watches episodes of shows; i.e., the video delivery system receives the user viewing history, which includes user activity (interaction) in selecting shows and episodes, and shows and episodes have start and end times, i.e., a time sequence);

generate a user data matrix with the time sequence based on the user watch history and the user interaction data (par. 0071: all the aforementioned features at a certain time point can be calculated using the history and video metadata at that point; using the history, the system determines which shows the user clicked on in the watch list and which the user did not, labeling them positive or negative accordingly, i.e., generating a user data matrix of the shows the user selected based on user history and user interaction);

index the generated user data matrix and rank, via a primary technique, the user data matrix based on the user watch history and the user interaction data (par. 0074: if show A and show B are both current shows the user is caught up with (i.e., watch history), and show A has a new episode available while show B does not, the machine-learning predictor ranks show A higher than show B, i.e., ranking the viewed shows based on the watch history and the generated matrix);

predict, via an artificial intelligence (AI) engine, a time-wise user preference in an offline mode based on the ranking of the user data matrix and a new content suggestion (par. 0098: machine-learning predictor 506 (machine learning = using computer intelligence to perform a task by learning behavior) may use explicit preference to determine that the probability of the user watching one show is higher than others, and may predict that the user clearly prefers show A over show B due to the explicit preference in the user's watch history, including when both shows have new episodes available; fig. 7A shows the predictor suggesting shows that would probably be selected by the user, hence an offline process rather than a real-time process performed while the suggestion is displayed);

generate an optimized model based on the time-wise user preference (par. 0106: machine-learning predictor 506 fits a function ƒ(u, s) 702 that, given a user's watch history H and show S, returns a value within a range such that ƒ(u, s) is consistent with the samples as much as possible; a model 704 is trained using the user's watch history, with samples derived from implicit user feedback, such as video views and loads/clicks on the watch list, and explicit user feedback, such as a user affirmatively selecting one show over another; the model 704 is then built to predict a user's affinity to shows), wherein the processor is to generate the optimized model by being configured to:

generate a sparse implicit interaction dataset based on the user interaction data and the user watch history (par. 0071: user behavior information is collected as training data, such as timestamped watch behavior and click/load behavior of the watchlist; ground truths or training data are prepared from implicit user feedback: video views and loads/clicks on the personalized list of shows; for each click Ci at time ti, features are calculated using the history H up until ti, with s the clicked show and s′ any show shown in the watchlist but not clicked at time ti; i.e., separating (generating) data based on implicit user feedback (interaction) into shows clicked and not clicked, where a show the user did not click equates to a sparse or missing user interaction);

generate, via the primary technique, an initial distribution of data based on the sparse implicit interaction dataset (par. 0071: a positive instance is generated for s, and negative instances are generated for each s′; the machine-learning predictor wants ƒ(u, s)=1, interpreted as a user with history H clicking on show s, while shows shown but not clicked are labeled negative, ƒ(u, s′)=0; in this manner multiple instances are generated from a user's history, i.e., generating positive and negative instances = generating an initial distribution of viewing data based on the user's implicit clicking behavior);

index the initial distribution of data to generate one or more indices and scale the one or more indices (par. 0071: labeling the data as positive or negative = indexing the data to generate indices from mathematical calculation); and

reiterate the primary technique on the scaled one or more indices and generate the optimized model (pars. 0072, 0106: a model 704 is trained using the user's watch history; the training process derives samples from implicit user feedback, such as video views and loads/clicks on the watch list, and explicit user feedback, such as a user affirmatively selecting one show over another; the model 704 is then built to predict a user's affinity to shows); and

recommend a time-wise user sequence in an online mode based on the generated optimized model and a subscribed user activity (par. 0109: watch list generator 108 is integrated with video delivery system 106 to dynamically update user interfaces on client devices; fig. 8 depicts dynamically updating watch list 110, which must be updated dynamically to represent the current user's predicted order of watching shows; otherwise watch all button 112, which plays all unseen episodes of shows in the order of watch list 110, would not work correctly, and if a user watches an episode of a show and watch list 110 is not updated, the predicted order in watch list 110 may not be accurate; par. 0128: delivery of video content by streaming may be accomplished under a variety of models, including one in which the user pays for viewing, e.g., a fee for access to the library of media programs or a portion of restricted media programs, or a pay-per-view service; i.e., the watch list (recommended content) is updated dynamically while the user watches a specific show (online mode), and the user's activity of watching content is performed under a subscription model).

Regarding claim 7, the system as claimed in claim 1: Kaya further discloses wherein the subscribed user activity comprises one or more user session data in the online mode (par. 0128: delivery of video content by streaming may be accomplished under a variety of models, including one in which the user pays for viewing, e.g., a fee for access to the library of media programs or a pay-per-view service; i.e., the watch list is updated dynamically while the user watches a specific show online, and the watching activity is performed under a subscription model).
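The offline/online split the examiner maps onto Kaya (an affinity model built ahead of time from logged watch history, with the watch list re-ranked dynamically while the user watches) might look like the following toy sketch. The class, the frequency-count "model," and the reset-on-watch rule are all illustrative assumptions, not disclosures of any cited reference.

```python
# Toy separation of offline model building from online serving.

class WatchListRecommender:
    def __init__(self):
        self.affinity = {}

    def train_offline(self, watch_history):
        # Offline: derive a per-show affinity score from logged history
        # (a simple frequency count stands in for a learned model here).
        for show in watch_history:
            self.affinity[show] = self.affinity.get(show, 0) + 1

    def recommend_online(self, candidates, just_watched=None):
        # Online: re-rank dynamically as the user watches, mirroring a
        # dynamically updated watch list.
        if just_watched is not None:
            self.affinity[just_watched] = 0  # demote what was just seen
        return sorted(candidates, key=lambda s: -self.affinity.get(s, 0))

rec = WatchListRecommender()
rec.train_offline(["A", "B", "B"])  # offline: B looks preferred
ranked = rec.recommend_online(["A", "B"], just_watched="B")  # online update
```

The point of the sketch is only the division of labor: training consumes history in batch before serving, while the serving path mutates the ranking in response to live activity.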
Regarding claim 9, the system as claimed in claim 8: Kaya further discloses wherein the scaled one or more indices comprise temporal information associated with the user watch history, the user interaction data, and the subscribed user activity (par. 0071: a positive instance is generated for s, and negative instances are generated for each s′; the machine-learning predictor wants ƒ(u, s)=1, interpreted as a user with history H clicking on show s, while shows shown but not clicked are labeled negative, ƒ(u, s′)=0; in this manner multiple instances are generated from a user's history, i.e., labeling the data as positive or negative = indexing the data to generate indices from mathematical calculation; the information is associated with the user history H and the shows the user clicked on (i.e., interaction data), and shows are time-associated, so the indices carry temporal information; par. 0128: delivery of video content by streaming may be accomplished under a variety of models, including one in which the user pays for viewing, e.g., a fee for access to the library of media programs or a pay-per-view service; i.e., the history data includes the subscribed user activity).

Regarding claim 11, Kaya meets the claim limitations as set forth for claim 1; Kaya further discloses receiving, by a processor associated with a system, a user parameter from one or more users (par. 0129, fig. 10).

Regarding claim 17, Kaya meets the claim limitations as set forth for claim 1; Kaya further discloses a user equipment (UE) for receiving personalized recommendations (par. 0026: watch list generator 108 uses a user's watch history to determine a personalized order for the watch list; the order reflects the user's personalized viewing habits and preferences based on episode availability and timing information for when episodes are released; watch list generator 108 may then provide a personalized watch list 110 to each user, and when a user uses the video delivery service, a client 104 may display watch list 110 in an interface), the UE comprising: one or more processors communicatively coupled to a processor associated with a system, wherein the one or more processors are coupled with a memory, and wherein said memory stores instructions, which when executed by the one or more processors, cause the one or more processors to: transmit a user parameter to the processor via a network (par. 0025: a media player may send signals to video delivery system 106 as to what shows and episodes are requested/watched, and video delivery system 106 records a user's watch history when the user watches episodes of shows; see also par. 0129, fig. 10).

Regarding claim 18, Kaya meets the claim limitations as set forth for claim 1; Kaya further discloses a non-transitory computer readable medium comprising processor-executable instructions that cause the processor to perform the recited steps (par. 0129, fig. 10).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 2, 4 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Kaya et al.
(US 20150365729), in view of Gurbanov et al. (US 20230036330). Regarding claim 2, The system as claimed in claim 1, Kaya does not disclose, wherein the processor is to index the generated user data matrix using an approximate nearest neighbors oh yeah (Annoy) technique. Gurbanov discloses, wherein the processor is to index the generated user data matrix using an approximate nearest neighbors oh yeah (Annoy) technique (Par. 0072, Methods of determining similar users, as well as methods of recommending content based on similarity, Par. 0111-0112, a method that is based on correlations between users and on the past behavior of similar users, where there are large numbers of users and items the computation of similarity and the computation of predicted responses can be computationally expensive and therefore slow. Therefore, where such an approach is used, the set of neighbours for each user is typically pre-computed offline and/or found using approximate nearest neighbour methods. An exemplary nearest neighbour method can be implemented using the C++ library Approximate Nearest Neighbours Oh Yeah (Annoy)). Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Kaya by teaching of index the generated user data matrix using an approximate nearest neighbors oh yeah (Annoy) technique, as taught by Gurbanov, to improve computation responses and reduce computational expense for prediction, as disclosed in Gurbanov par. 0111. Regarding claim 4, The system as claimed in claim 1, Kaya does not disclose, wherein the new content suggestion is based on at least one of: an age group, a region, and a country associated with the one or more users. Gurbanov discloses, wherein the new content suggestion is based on at least one of: an age group, a region, and a country associated with the one or more users (Par. 
0085, the processor 230 may provide a recommended content that is most suitable for the user based on the gender and age of the user).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Kaya by the teaching that the new content suggestion is based on at least one of: an age group, a region, and a country associated with the one or more users, as taught by Gurbanov, to improve user engagement by recommending media content that is of the user’s interest to create a positive user experience, as disclosed in Gurbanov par. 0002-0003.

Regarding claim 12, Kaya in view of Gurbanov meets the claim limitations as set forth in claim 2.

Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kaya et al. (US 20150365729), in view of Resheff et al. (US 11494701).

Regarding claim 3, The system as claimed in claim 1, Kaya does not disclose, wherein the primary technique comprises a Bayesian Personalized Ranking (BPR) technique.

Resheff discloses, wherein the primary technique comprises a Bayesian Personalized Ranking (BPR) technique (Col. 12, line 18-25, FIG. 5 shows an example system (500) in accordance with one or more embodiments. As shown in FIG. 5, records (504), including user records and item records, are stored in database (502). One or more records, such as record (506) having user identifiers and item identifiers, are transmitted to the recommender machine learning model (508). The recommender machine learning model (508) may apply a Bayesian personalized ranking model (510) to create a user matrix (512) and item matrix (514)).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Kaya by the teaching that the primary technique comprises a Bayesian Personalized Ranking (BPR) technique, as taught by Resheff, to use a personalized ranking model that provides affinity between users and items, as disclosed in Resheff Col. 12, line 18-28.
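For practitioners studying the claim 3 rejection, the BPR model cited from Resheff can be made concrete with a minimal sketch. This is not the applicant's claimed implementation or Resheff's system: it is a generic BPR update in NumPy, and the factor size, learning rate, and toy triples are illustrative assumptions.

```python
import numpy as np

def bpr_train(triples, n_users, n_items, k=8, lr=0.05, reg=0.01,
              epochs=200, seed=0):
    """Generic Bayesian Personalized Ranking via SGD.

    Maximizes ln sigmoid(x_uij) where x_uij = u . (v_i - v_j):
    the observed item i should outrank the unobserved item j.
    """
    rng = np.random.default_rng(seed)
    U = 0.1 * rng.standard_normal((n_users, k))   # user factors
    V = 0.1 * rng.standard_normal((n_items, k))   # item factors
    for _ in range(epochs):
        for u, i, j in triples:
            x_uij = U[u] @ (V[i] - V[j])
            g = 1.0 / (1.0 + np.exp(x_uij))       # d ln sigma / d x
            U[u] += lr * (g * (V[i] - V[j]) - reg * U[u])
            V[i] += lr * (g * U[u] - reg * V[i])
            V[j] += lr * (-g * U[u] - reg * V[j])
    return U, V

# Toy data: user 0 watched item 0 but not item 1; user 1 the reverse.
triples = [(0, 0, 1), (1, 1, 0)]
U, V = bpr_train(triples, n_users=2, n_items=2)
assert U[0] @ V[0] > U[0] @ V[1]   # item 0 outranks item 1 for user 0
assert U[1] @ V[1] > U[1] @ V[0]
```

The user matrix U and item matrix V here correspond to elements (512) and (514) in Resheff's FIG. 5; a user's ranking over items is read off from the dot products of that user's row with each item row.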
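Returning to the Annoy indexing cited against claim 2 above: Annoy (Approximate Nearest Neighbours Oh Yeah) builds random-projection trees to approximate angular (cosine) nearest-neighbor search over rows of a user data matrix. The sketch below shows the exact brute-force search Annoy approximates, in NumPy only; with the real C++ library one would build an AnnoyIndex over the same rows instead. Names and the toy matrix are illustrative assumptions.

```python
import numpy as np

def nearest_users(user_matrix, query_idx, n_neighbors=2):
    """Exact angular nearest-neighbor lookup over a user-item matrix.

    Annoy approximates this search with random-projection trees so
    that neighbor sets can be pre-computed cheaply at scale, as
    Gurbanov par. 0111-0112 describes.
    """
    # Normalize rows so the dot product equals cosine similarity.
    X = user_matrix / np.linalg.norm(user_matrix, axis=1, keepdims=True)
    sims = X @ X[query_idx]            # cosine similarity to the query
    order = np.argsort(-sims)          # most similar first
    return [i for i in order if i != query_idx][:n_neighbors]

# Rows: per-user interaction vectors (e.g., watch counts per title).
users = np.array([[5.0, 0.0, 1.0],
                  [4.0, 0.0, 2.0],    # similar tastes to user 0
                  [0.0, 6.0, 0.0]])   # very different tastes
assert nearest_users(users, 0)[0] == 1
```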
Regarding claim 13, Kaya in view of Resheff meets the claim limitations as set forth in claim 3.

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Kaya et al. (US 20150365729), in view of Lopatecki et al. (US 20200107070).

Regarding claim 5, The system as claimed in claim 1, Kaya does not disclose, wherein the processor is to predict the time-wise user preference and generate the optimized model using a long short-term memory (LSTM) technique.

Lopatecki discloses, wherein the processor is to predict the time-wise user preference and generate the optimized model using a long short-term memory (LSTM) technique (Par. 0039, the multi-RNN prediction system 102 can train a plurality of LSTM neural networks for a plurality of users to generate a plurality of media consumption predictions for the users based on historical media consumption data maintained by the media analytics system 118. Specifically, the multi-RNN prediction system 102 uses the LSTM neural networks to learn media consumption habits, trends, and preferences of the users for generating predictions for a variety of target audiences including different combinations of the users).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Kaya by the teaching of predicting the time-wise user preference and generating the optimized model using a long short-term memory (LSTM) technique, as taught by Lopatecki, to improve efficiency by reducing the time and resources needed to train neural networks to generate media consumption predictions for a target audience that includes a large number of people, as disclosed in Lopatecki Par. 0025.

Regarding claim 14, Kaya in view of Lopatecki meets the claim limitations as set forth in claim 5.

Claims 6 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Kaya et al. (US 20150365729), in view of Hu et al. (US 10896679).
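The LSTM-based time-wise preference prediction cited from Lopatecki for claim 5 above can be sketched at the level of a single hand-rolled LSTM cell stepped over a user's timestamped interaction features. This is a generic illustration of the standard LSTM gate equations, not Lopatecki's multi-RNN system; the dimensions and random inputs are invented for the example.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates plus candidate cell.

    x: feature vector for the current interaction (time step)
    h, c: previous hidden and cell states
    W, U, b: parameters for the four gates, stacked row-wise.
    """
    z = W @ x + U @ h + b
    n = h.size
    i = 1 / (1 + np.exp(-z[:n]))          # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))       # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))     # output gate
    g = np.tanh(z[3*n:])                  # candidate cell state
    c = f * c + i * g                     # carry long-term memory
    h = o * np.tanh(c)                    # expose short-term state
    return h, c

# Step over a toy sequence of 10 timestamped interaction vectors.
rng = np.random.default_rng(0)
d_in, d_hid = 4, 3
W = 0.1 * rng.standard_normal((4 * d_hid, d_in))
U = 0.1 * rng.standard_normal((4 * d_hid, d_hid))
b = np.zeros(4 * d_hid)
h = c = np.zeros(d_hid)
for x in rng.standard_normal((10, d_in)):
    h, c = lstm_step(x, h, c, W, U, b)
assert h.shape == (3,) and np.all(np.abs(h) < 1)
```

In a recommender such as the one claimed, the final hidden state h would be fed to a small output layer that scores candidate titles, so the prediction depends on the order and timing of past interactions rather than on counts alone.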
Regarding claim 6, The system as claimed in claim 1, Kaya does not disclose, wherein the processor is to recommend the time-wise user sequence using a cosine similarity technique.

Hu discloses, wherein the processor is to recommend the time-wise user sequence using a cosine similarity technique (Col. 16, line 44-55, generate an output recommending one or more items of content for output (e.g., ranked list 138). In various examples, after training the model and determining feature vector embeddings for various input features, similarities and/or commonalities among feature embeddings may be determined using techniques such as cosine similarity, correlation, Euclidean distance, etc. A recommended action (e.g., content to display and/or output) may be content that is associated with the most similar features, as determined using the aforementioned techniques).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Kaya by the teaching of recommending the time-wise user sequence using a cosine similarity technique, as taught by Hu, to determine similarities and/or commonalities among feature embeddings in the input data of the predictor model, such as context data and user data, using these techniques, as disclosed in Hu Col. 16, line 44-55.

Regarding claim 15, Kaya in view of Hu meets the claim limitations as set forth in claim 6.

Claims 10 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Kaya et al. (US 20150365729), in view of Shang et al. (US 20170228810).

Regarding claim 10, The system as claimed in claim 1, Kaya does not disclose, wherein the processor is to generate the user data matrix using at least one of: a content-based filtering technique and a collaborative filtering technique.

Shang discloses, wherein the processor is to generate the user data matrix using at least one of: a content-based filtering technique and a collaborative filtering technique (Par.
0039, item-based collaborative filtering technique to analyze a user-item matrix (e.g., retrieved from user's transactional history, user's ratings, etc.) to identify relationships among items. In some examples, the data for the matrix may be retrieved from a user's profile information, transaction history of the user, purchase data, rating, viewing data).

Therefore, it would have been obvious to one of ordinary skill in the art at the time of the invention to modify Kaya by the teaching of generating the user data matrix using at least one of: a content-based filtering technique and a collaborative filtering technique, as taught by Shang, to automatically filter content items that are of potential interest to users, as disclosed in Shang, par. 0001 and 0039.

Regarding claim 16, Kaya in view of Shang meets the claim limitations as set forth in claim 10.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AKSHAY DOSHI whose telephone number is (571)272-2736. The examiner can normally be reached M-F 9:30 AM to 6:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, JOHN W MILLER, can be reached at (571)272-7353. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.D./Examiner, Art Unit 2422
/JOHN W MILLER/Supervisory Patent Examiner, Art Unit 2422

Prosecution Timeline

Oct 30, 2024
Application Filed
Sep 25, 2025
Non-Final Rejection — §102, §103
Dec 29, 2025
Response Filed
Feb 21, 2026
Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12568270
ELEMENT DISPLAY METHOD AND APPARATUS, ELEMENT SELECTION METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 03, 2026
Patent 12568255
METHODS AND APPARATUS FOR IDENTIFYING MEDIA CONTENT USING TEMPORAL SIGNAL CHARACTERISTICS
2y 5m to grant Granted Mar 03, 2026
Patent 12563264
TECHNIQUES FOR REUSING PORTIONS OF ENCODED ORIGINAL VIDEOS WHEN ENCODING LOCALIZED VIDEOS
2y 5m to grant Granted Feb 24, 2026
Patent 12549810
INFORMATION PROCESSING APPARATUS, CONTROL METHOD OF INFORMATION PROCESSING APPARATUS, NON-TRANSITORY COMPUTER READABLE MEDIUM, AND SYSTEM
2y 5m to grant Granted Feb 10, 2026
Patent 12500841
DEVICE, METHOD AND PROGRAM FOR COMPUTER AND SYSTEM FOR DISTRIBUTING CONTENT BASED ON THE QUALITY OF EXPERIENCE
2y 5m to grant Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+39.2%)
3y 3m
Median Time to Grant
Moderate
PTA Risk
Based on 268 resolved cases by this examiner. Grant probability derived from career allow rate.
