Notice of Pre-AIA or AIA Status
2. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
2. A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office Action has been withdrawn pursuant to 37 CFR 1.114.
Detailed Action
3. This Non-Final Office Action is responsive to Applicants’ submission dated 2/26/26. Claims 1-20 remain pending, of which claims 1, 11, and 16 are independent.
Claim Rejections - 35 USC § 103
4. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
5. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office Action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
6. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
7. Claims 1-2 and 4 are rejected under 35 U.S.C. 103 as being unpatentable over Non-Patent Literature “Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations” (“Yi”) in view of Non-Patent Literature “Weighted Similarity and Core-User-Core-Item Based Recommendations” (“Zhang”).
Regarding claim 1, Yi teaches A computer-implemented method to train a machine-learning model to recommend virtual experiences to a user (“YouTube neural retrieval model”, as discussed in sections 5 and 5.1 on page 5 and as illustrated via Figure 2 on page 6, where sections 5 and 5.1-5.3 detail training of the model based on video and user features, and where the model per section 5 is understood to recommend videos (equivalent to the recited “virtual experiences”, as clarified by Applicants’ specification at [0048]): Section 5 begins “We apply the proposed modeling framework and scale it to build a large scale neural retrieval system for one particular product in YouTube. This product generates video recommendations conditioned on a video (called seed video) being watched by a user ...”), the method comprising:
receiving training data that includes pairs of users and virtual experiences (section 5.1 on page 5, discussing video and user features respectively as used in building and training the model, where video content as recommended per the YouTube framework is equivalent to the recited “virtual experiences” (e.g., in view of Applicants’ specification [0048])), wherein each user of a pair is associated with user features (section 5.1’s 4th full paragraph, as found on page 5), each virtual experience of the pair is associated with item features (section 5.1’s 3rd full paragraph, as found on page 5), and each pair includes a virtual experience that a corresponding user interacted with (section 3, as found on page 3, top of right column, discussing “The goal is to learn model parameter θ from a training dataset of T examples, denoted by T := {(xi,yi,ri)}T i=1, where (xi,yi) denotes the pair of query xi and item yi , and ri ∈ R is the associated reward for each pair”, where the query as mentioned here is understood to encompass the user and seed video features (see page 5, section 5.1’s 4th full paragraph), where the item as mentioned here corresponds to embeddings/features for videos (i.e., “virtual experience” as recited));
training a user tower of the machine-learning model by: generating first feature embeddings based on the user features in the training data; and training a first deep neural network (DNN) to output user embeddings based on the first feature embeddings; and training an item tower of the machine-learning model by: generating second feature embeddings based on the item features in the training data; and training a second DNN to output item embeddings based on the second feature embeddings; (two-tower DNN model that separately learns query and video, per Figure 1 as shown on page 2 and as described in page 2’s left column: “Figure 1 provides an illustration of the two-tower model architecture where left and right towers encode {user, context} and {item} respectively”, where training of the model is further clarified per section 3, as found on page 3, top of right column, discussing “The goal is to learn model parameter θ from a training dataset of T examples, denoted by T := {(xi,yi,ri)}T i=1, where (xi,yi) denotes the pair of query xi and item yi , and ri ∈ R is the associated reward for each pair” and via Algorithm 1 as shown at the top of page 4’s left column).
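For illustrative context only, and not as a characterization of Yi’s actual implementation, a two-tower arrangement of the kind mapped above — separate DNN towers producing unit-norm user and item embeddings that are scored by inner product — can be sketched with hypothetical dimensions and randomly initialized weights:

```python
import numpy as np

# Illustrative two-tower sketch (hypothetical dimensions; not Yi's actual model).
rng = np.random.default_rng(0)

def make_tower(in_dim, hidden, out_dim):
    """Return randomly initialized weights for a one-hidden-layer DNN tower."""
    return {
        "W1": rng.normal(size=(in_dim, hidden)) * 0.1,
        "W2": rng.normal(size=(hidden, out_dim)) * 0.1,
    }

def tower_forward(tower, x):
    """Map raw feature embeddings to a unit-norm embedding (ReLU hidden layer)."""
    h = np.maximum(0.0, x @ tower["W1"])
    z = h @ tower["W2"]
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

user_tower = make_tower(in_dim=8, hidden=16, out_dim=4)   # "first DNN"
item_tower = make_tower(in_dim=6, hidden=16, out_dim=4)   # "second DNN"

user_features = rng.normal(size=(1, 8))   # stand-in for user feature embeddings
item_features = rng.normal(size=(5, 6))   # stand-in for item feature embeddings

u = tower_forward(user_tower, user_features)   # user embedding
v = tower_forward(item_tower, item_features)   # item embeddings
scores = (u @ v.T).ravel()                     # inner-product affinity scores
```

The sketch shows only the forward scoring path; in Yi, the towers are trained jointly, with the reward used as an example weight per the Training Labels discussion quoted below in relation to claim 2.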
Applicants’ claim further recites the additional limitations (which Yi does not entirely teach) for generating, based on the user tower and the item tower, a user-experience-experience graph that is formed by:
generating first edges between user nodes and virtual experience nodes based on users associated with the user nodes interacting with the virtual experiences associated with the virtual experience nodes;
generating second edges between corresponding virtual experience nodes based on one or more users interacting with both of two virtual experiences associated with corresponding virtual experience nodes, wherein a weight associated with each of the second edges is based on a number of user actions that are a same type that are performed by the one or more users interacting with both of the two virtual experiences;
identifying one or more virtual experiences associated with one or more corresponding virtual experience nodes in the user-experience-experience graph with limited user engagement; and
generating one or more item clusters from the one or more virtual experiences with limited user engagement and corresponding item embeddings based on distance in embedding space between the one or more virtual experiences with limited user engagement and the corresponding item embeddings;
wherein the user-experience-experience graph is able to generate one or more candidate virtual experiences for a user with limited user engagement in response to receiving user features associated with the user.
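For illustrative context only, the recited first-edge and second-edge generation can be sketched with hypothetical interaction data (the user names, experience names, and action types below are invented for illustration and appear in none of the cited references):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical interaction log: (user, experience, action_type).
interactions = [
    ("u1", "e1", "play"), ("u1", "e2", "play"),
    ("u2", "e1", "play"), ("u2", "e2", "purchase"),
    ("u3", "e2", "play"), ("u3", "e3", "play"),
]

# First edges: user node -> experience node for each observed interaction.
first_edges = {(u, e) for u, e, _ in interactions}

# Second edges: experience-experience edges weighted by the number of
# same-type actions performed by users who interacted with both experiences.
by_user = defaultdict(list)
for u, e, a in interactions:
    by_user[u].append((e, a))

second_edges = defaultdict(int)
for u, pairs in by_user.items():
    for (e1, a1), (e2, a2) in combinations(pairs, 2):
        if e1 != e2 and a1 == a2:          # same action type on both experiences
            second_edges[tuple(sorted((e1, e2)))] += 1
```

Here "u1" playing both "e1" and "e2" contributes weight to the (e1, e2) edge, while "u2" does not, because its two actions are of different types.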
At best, Yi teaches mapping of features/embeddings for both users and items as vectors in a multi-dimensional space; see, e.g., section 3’s first paragraph on page 3’s left column, and the mappings to Yi discussed just above. Extending this discussion to Yi’s section 3 (page 3, top of the right column), Yi does appear to teach associating a reward with a pair of user and item, which the Examiner equates with Applicants’ limitation for “generating first edges between user nodes and virtual experience nodes based on users associated with the user nodes interacting with the virtual experiences associated with the virtual experience nodes”, although it is unclear whether this is taught in the context of generating a user-experience-experience graph as recited, rather than, for example, only a user-experience graph.
To teach these additional limitations that Yi lacks, the Examiner then further relies upon ZHANG:
Zhang teaches a comparable recommendation system that graphs users and items separately to find similarities between users and similarities between items. See, e.g., the last paragraph on its page 15. The Examiner notes that this is a latter step that builds upon previous determinations of user and item correlation (e.g., per page 6’s Figure 1 and the bottom paragraph on page 25), which the Examiner believes is similar to Yi as discussed immediately above in relation to Applicants’ limitation for generating first edges, and likewise apt to teach that same limitation.
Based on the determination of user and item correlation (e.g., Figure 1 as mentioned just above), Zhang teaches examples of non-weighted and weighted item similarity calculations between a pair of items. See, e.g., equations 2 and 5 as discussed on page 7. These are examples of item similarity based on separate users’ engagement with the respective different items, and in the weighted instance, the engagement has a specific type (e.g., a measurement of how much the items are liked). The Examiner equates this with Applicants’ general recitation for the generation of a user-experience-experience graph specifically in part by generating second edges between corresponding virtual experience nodes based on one or more users interacting with both of two virtual experiences associated with corresponding virtual experience nodes, wherein a weight associated with each of the second edges is based on a number of user actions that are a same type that are performed by the one or more users interacting with both of the two virtual experiences. As was noted at the bottom of page 15, item-item similarity is captured as edges in a sub-graph, and this graphing is equated by the Examiner with Applicants’ U-E-E graph type.
Moreover, this approach is understood by the Examiner to recommend an item to a user, e.g., as discussed in section 3.1 at the bottom of page 12, thereby reading on the limitation for identifying one or more virtual experiences associated with one or more corresponding virtual experience nodes in the user-experience-experience graph with limited user engagement. The Examiner notes Zhang’s caveat that its approach, applied in isolation, is limited in a cold-start problem where there is explicitly “no behavior or no purchase recordings available”, as Zhang notes in the last paragraph at the bottom of its page 25. That said, the Examiner understands Applicants’ claims to be materially distinguishable from that caveat: (i) Applicants’ claim is explicitly directed to limited engagement, and (ii) Applicants’ published specification, at [0066] for example, details examples in which some engagement exists, constituting limited engagement that exceeds Zhang’s zero-engagement caveat.
Zhang’s determination of item similarity, as discussed above, explicitly involves clustering and the related distance considerations. See, e.g., Zhang’s section 3.2 (beginning on page 15) and especially the bottom paragraph found on Zhang’s page 15. Hence, with this understanding and in combination with the other related teachings discussed here, the Examiner believes Zhang sufficient to teach the further limitations for generating one or more item clusters from the one or more virtual experiences with limited user engagement and corresponding item embeddings based on distance in embedding space between the one or more virtual experiences with limited user engagement and the corresponding item embeddings and wherein the user-experience-experience graph is able to generate one or more candidate virtual experiences for a user with limited user engagement in response to receiving user features associated with the user.
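For illustrative context only, clustering limited-engagement items with their nearest neighbors by distance in embedding space — the general mechanism at issue in this limitation — can be sketched with hypothetical 2-D embeddings (none of the values below are drawn from the cited art):

```python
import numpy as np

# Hypothetical item embeddings; "e4" and "e5" have limited engagement.
embeddings = {
    "e1": np.array([1.0, 0.0]), "e2": np.array([0.9, 0.1]),
    "e3": np.array([0.0, 1.0]), "e4": np.array([0.95, 0.05]),
    "e5": np.array([0.05, 0.95]),
}
low_engagement = ["e4", "e5"]

def nearest_cluster(item, others, k=1):
    """Group a limited-engagement item with its k nearest items in embedding space."""
    ranked = sorted(others, key=lambda o: np.linalg.norm(embeddings[item] - embeddings[o]))
    return [item] + ranked[:k]

popular = [i for i in embeddings if i not in low_engagement]
clusters = {i: nearest_cluster(i, popular) for i in low_engagement}
```

Each limited-engagement item thus inherits a cluster containing well-engaged items whose embeddings lie nearby, which is the sense in which candidates could later be surfaced for it.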
Yi and Zhang teach comparable user-item recommendation systems that graph users and items to recommend items to users. Hence, they are similarly directed and therefore analogous. It would have been obvious to incorporate Zhang’s particular computation-based similarity determinations for making item recommendations into Yi’s framework to provide more accurate recommendations, as is typically sought in the state of the art; see pages 25-26 of Zhang.
Regarding claim 2, Yi in view of Zhang teaches the method of claim 1, as discussed above. The aforementioned references further teach the additional limitation wherein the user actions include interacting with the virtual experiences associated with the virtual experience nodes, spending money while interacting with the virtual experiences associated with the virtual experience nodes, or a duration of interacting with the virtual experiences associated with the virtual experience nodes (Yi’s page 5, right column, discussing Training Labels, expresses that a user’s engagement is subject to analysis and embedding/featurizing: “Video clicks are used as positive labels. In addition, for each click, we construct a reward ri to reflect different degrees of user engagement with the video. For example, ri = 0 for clicked videos with little watch time. On the other hand, ri = 1 indicates the whole video got watched. The reward is used as example weight as shown in Equation (4).”; see also Zhang’s Figure 1, teaching a similar user-item affinity notion that is based on user engagement/interaction). The motivation for combining the references is as discussed above in relation to claim 1.
Regarding claim 4, Yi in view of Zhang teach the method of claim 1, as discussed above. The aforementioned references teach the additional limitation wherein the first edges between the user nodes and the virtual experience nodes are based on user affinity (Yi teaches mapping of features/embeddings for both users and items in the form of vectors in a multi-dimensional space, see e.g., section 3’s first paragraph, found on page 3’s left column, and see also Zhang’s Figure 1 teaching a similar user-item affinity notion that is amenable to graphing). The motivation for combining the references is as discussed above in relation to claim 1.
8. Claims 3, 5 and 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Yi in view of Zhang and further in view of CN 114996561 A (“Xu”).
Regarding claim 3, Yi in view of Zhang teaches the method of claim 1, as discussed above. The aforementioned references do not teach the additional limitations further comprising determining a predicted traversal of the user-experience-experience graph using a random walk algorithm or a Personalized PageRank algorithm. The Examiner relies upon Xu to teach what Yi etc. lack; see, e.g., the paragraph beginning at the bottom of Xu’s page 14: “... a sequence diagram of the interactive media account generated by the random walk provided by the embodiment of the application embodiment, wherein 701 shows the sequence of the interactive media account number 1, 702 is the sequence of interactive media account number 2, 703 is the sequence of interactive media account number 3. The embedded representation of each interactive media account associated with the user can be obtained based on the sequence of each interactive media account, and the embedded representation is used as the user graphic feature.”
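For illustrative context only, the random-walk sequence generation that Xu describes in the quoted passage can be sketched as follows (the graph, node names, and walk length are hypothetical):

```python
import random

# Hypothetical bipartite graph of media accounts and videos (adjacency lists).
graph = {
    "acct1": ["v1", "v2"], "acct2": ["v2", "v3"],
    "v1": ["acct1"], "v2": ["acct1", "acct2"], "v3": ["acct2"],
}

def random_walk(start, length, rng):
    """Generate one node sequence by repeatedly stepping to a random neighbor."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(graph[walk[-1]]))
    return walk

rng = random.Random(42)
sequences = [random_walk(node, length=5, rng=rng) for node in ("acct1", "acct2")]
# Each sequence could then be fed to a skip-gram-style model to learn the
# embedded representations Xu's quoted passage refers to.
```
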
Like Yi, Xu relates to a two-tower (double-tower) model for recommending media/video to a user based on characteristics/features of both the user and the media/video. Hence, the references mentioned here are strongly analogous as being directed to the same subject matter. It would have been obvious to incorporate Xu’s use of a graph to more concretely model a user’s engagement with media/video as a function of time into Yi’s framework, with a reasonable expectation of success, such as to improve upon Yi’s consideration of modeling a user’s streaming as a function of time (Yi’s page 2, left column, 1st full paragraph: “In contrast to MLP model where the output item vocabulary is stationary, we target the streaming data situation with vocabulary and distribution changes over time.”, and Yi’s page 5, right column, section 5.1’s 4th full paragraph: “One example is a sequence of k video ids the user recently watched.”).
Regarding claim 5, Yi in view of Zhang teaches the method of claim 1, as discussed above. The aforementioned references teach the additional limitation wherein the machine-learning model is a first machine-learning model (Yi’s Figure 1: two-tower DNN model, as discussed above per claim 1), but not the further limitations comprising: training a second machine-learning model to rank a subset of the one or more candidate virtual experiences to recommend to the user, wherein training the second machine-learning model is based on the one or more item clusters. Rather, the Examiner relies upon Xu to teach what Yi etc. otherwise lack; see, e.g., Xu’s “recall module”, which determines similarity among already-graphed information, per Xu’s page 3, second full paragraph. Because the recall module operates on information derived from the machine-learning approaches/techniques already discussed per claim 1, and provides a next-step characterization of similarity, it can be understood to be a separate model that is also built on machine learning, and similarity as determinable by the recall module is a basis for ranking per page 11’s 7th full paragraph: “... obtaining the candidate media account vector of similarity ranking, can be obtained by the number or proportion of ranking precedence, for example, obtaining 50 candidate media account vector ranking front, or obtaining the candidate media account vector accounting for 2 % of the total number of all media account vectors before being ranked. in the step 104, generating the recommendation information based on a plurality of recall media account number.”
The motivation for combining the references is as discussed above in relation to claim 3.
Regarding claim 16, the Examiner understands the present claim to essentially constitute the scope of claim 5 as discussed above, with the additional caveat of taking the model trained per claim 5 and applying/using it to perform inference. Hence, the Examiner rejects the present claim based on the same rationale given above per claim 5, with the additional reasoning provided here that it would have been obvious to apply the model of claim 5 once it is trained.
Regarding claim 17, Yi in view of Zhang and further in view of Xu teach the recommendation system of claim 16, as discussed above. The aforementioned references teach the additional limitations wherein the operations further include: receiving a query that includes the user features (Yi’s page 1, second column, 2nd full paragraph discussing “Given a triplet of {user,context,item}, a common solution to build a scalable retrieval model is: 1) learn query and item representations for {user,context } and {item} respectively”, from which it is understood that the framework’s notion of a query is inclusive of representations of a user and a context) and generating user vectors based on the user features, wherein outputting the candidate virtual experiences includes performing a nearest-neighbor search of the user vectors to the item clusters (Yi’s page 4, left column, discussing “Nearest Neighbor Search: Once the embedding functions u,v are learned, inference consists of two steps: 1) computing query embedding u(x, θ); 2) performing nearest neighbor search over a set of item embeddings that are pre-computed from embedding function v. Moreover, our modeling framework offers the option to choose an arbitrary set of items to serve at inference time. Instead of computing the dot product over all items to surface top items, low-latency retrieval is commonly based on a highly efficient similarity search system built on hashing techniques, e.g., [2, 10, 25], for approximate maximum inner product search (MIPS) problems. Specifically, compact representations of high dimensional embeddings are built through quantization [20] and end-to-end learning of coarse and product quantizers [36].”, and it would be obvious to extend computation to be inclusive of item clusters and nearest neighbor considerations as Zhang teaches in its section 3). The motivation for combining the references is as discussed above in relation to claim 16.
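For illustrative context only, the nearest-neighbor search quoted above from Yi can be sketched in its exact brute-force (maximum inner product) form; the embeddings below are hypothetical, and Yi’s production system instead uses the approximate hashing/quantization techniques it cites:

```python
import numpy as np

# Pre-computed item embeddings (hypothetical values) and their ids.
items = np.array([[0.9, 0.1], [0.1, 0.9], [0.7, 0.7]])
item_ids = ["e1", "e2", "e3"]

def top_k_mips(query, k=2):
    """Exact maximum-inner-product search: score all items, return the best k ids."""
    scores = items @ query
    order = np.argsort(-scores)[:k]
    return [item_ids[i] for i in order]

user_vec = np.array([1.0, 0.0])     # query embedding from the user tower
candidates = top_k_mips(user_vec)   # candidate virtual experiences for the user
```

At scale, computing the dot product over all items is exactly what Yi’s quoted passage says low-latency retrieval avoids via approximate MIPS; the sketch captures only the underlying operation.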
Regarding claim 18, Yi in view of Zhang and further in view of Xu teach the recommendation system of claim 17, as discussed above. The aforementioned references teach the additional limitations wherein the item clusters include the one or more virtual experiences with limited user engagement and corresponding item embeddings based on similarity between the one or more virtual experiences with limited user engagement and the corresponding item embeddings (Yi and Zhang as discussed above per claim 16, as related to the present claim’s similarity-based item clustering aspect (e.g., specifically see Zhang’s section 3), and the modified framework in view of the cited art is amenable to application in limited engagement scenarios for reasons discussed above in relation to claim 1 (i.e., Zhang in particular contemplating applicability of its approach to cold-start scenarios just short of no-information scenarios, as discussed per its section 4.6.2’s last paragraph on page 25)). The motivation for combining the references is as discussed above in relation to claim 16.
Regarding claim 19, Yi in view of Zhang and further in view of Xu teach the recommendation system of claim 16, as discussed above. The aforementioned references teach the additional limitations wherein the operations further include: receiving a query that includes the user features (Yi’s page 1, second column, 2nd full paragraph discussing “Given a triplet of {user,context,item}, a common solution to build a scalable retrieval model is: 1) learn query and item representations for {user,context } and {item} respectively”, from which it is understood that the framework’s notion of a query is inclusive of representations of a user and a context) and determining a similarity between the user features and cluster identifiers (Yi’s page 4, left column, discussing “Nearest Neighbor Search: Once the embedding functions u,v are learned, inference consists of two steps: 1) computing query embedding u(x, θ); 2) performing nearest neighbor search over a set of item embeddings that are pre-computed from embedding function v. Moreover, our modeling framework offers the option to choose an arbitrary set of items to serve at inference time. Instead of computing the dot product over all items to surface top items, low-latency retrieval is commonly based on a highly efficient similarity search system built on hashing techniques, e.g., [2, 10, 25], for approximate maximum inner product search (MIPS) problems. Specifically, compact representations of high dimensional embeddings are built through quantization [20] and end-to-end learning of coarse and product quantizers [36].”, and it would be obvious to extend computation to be inclusive of item clusters and nearest neighbor considerations as Zhang teaches in its section 3), wherein outputting the ranked subset of the candidate virtual experiences is based on the cluster identifiers (Xu’s “recall module” which determines similarity among already graphed information, per Xu’s page 3, second full paragraph, which operates on information that is derived from immediate/direct machine-learning approaches/techniques as already discussed per claim 1, and hence as a next-step characterization for similarity, it can be understood to be a separate model that is also built on machine learning, and where similarity as determinable by the recall module is a basis for ranking per page 11’s 7th full paragraph: “... obtaining the candidate media account vector of similarity ranking, can be obtained by the number or proportion of ranking precedence, for example, obtaining 50 candidate media account vector ranking front, or obtaining the candidate media account vector accounting for 2 % of the total number of all media account vectors before being ranked. in the step 104, generating the recommendation information based on a plurality of recall media account number.”). The motivation for combining the references is as discussed above in relation to claim 16.
Regarding claim 20, Yi in view of Zhang and further in view of Xu teach the recommendation system of claim 19, as discussed above. The aforementioned references teach the additional limitations wherein the cluster identifiers represent a past interaction history of the user (Yi teaches mapping of features/embeddings for both users and items in the form of vectors in a multi-dimensional space, see e.g., section 3’s first paragraph, found on page 3’s left column; and also Zhang’s page 6’s Figure 1 and also the bottom paragraph found on page 25).
9. Claims 6-7 are rejected under 35 U.S.C. 103 as being unpatentable over Yi in view of Zhang and further in view of U.S. Patent Application Publication No. 2017/0279905 (“Shah”).
Regarding claim 6, Yi in view of Zhang and further in view of Shah teach the method of claim 1, as discussed above. The aforementioned references teach the additional limitation wherein training the user tower or the item tower of the machine-learning model includes generating a users-users-experience graph (Yi’s modelling based on vectors in a multi-dimensional space, e.g., per Yi’s section 3, first paragraph, as found on page 3’s left column, and Xu’s additional teaching of a graph, as discussed per claim 1, as found at the paragraph at the bottom of page 13 through the top of page 14, the 4th-5th full paragraphs on page 14, the 1st-2nd full paragraphs on page 15, and the paragraph at the bottom of page 16 which continues onto page 17, the graph being understood by the Examiner to capture user and media account relationships) that is formed by: generating edges between user nodes and virtual experience nodes based on users associated with the user nodes interacting with the virtual experiences associated with the virtual experience nodes, wherein the edges between the user nodes and the virtual experience nodes are based on user affinity (Yi’s section 3, as found on page 3, top of right column, discussing “The goal is to learn model parameter θ from a training dataset of T examples, denoted by T := {(xi,yi,ri)}T i=1, where (xi,yi) denotes the pair of query xi and item yi , and ri ∈ R is the associated reward for each pair”, such that it is clear that user and video are graphed/mapped to indicate a link/relationship, where it follows that, if graphed as Xu contemplates, this link between a user and a video would be an edge as recited, and further where Yi’s page 5, right column, discussing Training Labels, expresses that a user’s engagement is subject to analysis and embedding/featurizing (i.e., “affinity” as recited): “Video clicks are used as positive labels. In addition, for each click, we construct a reward ri to reflect different degrees of user engagement with the video. For example, ri = 0 for clicked videos with little watch time. On the other hand, ri = 1 indicates the whole video got watched. The reward is used as example weight as shown in Equation (4)”), but not generating edges between the user nodes based on users corresponding to the user nodes interacting with one or more same two virtual experiences, wherein the edges between the users are based on a number of same user actions performed between the two corresponding user nodes. Rather, the Examiner relies upon Shah to teach what Yi etc. lack; see, e.g., Shah’s [0050] discussing “Edges (e.g., 552, 554, 556, and 558) of the social interaction graph 500 can be links to user records corresponding to users that have had an interaction on the collaborative software application. As a specific example, the user corresponding to user record 502 has had interactions with the users corresponding to user records 560, 570, 580, and 595 as indicated by the edges 552, 554, 556, and 558 respectively. Each edge can have a score or weight based on a quantity and quality of interactions between the users connected by the edge. For example, if the user corresponding to user record 502 (abbreviated as user 502) and the user 560 have more and higher quality interactions than the user 502 and the user 570, then the edge 552 will have a higher score than the edge 554.”
Like Yi, Shah is directed to using a graph-based approach to better understand information/data in pursuit of a machine-learning objective. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Shah’s user-to-user edge determination with Yi’s data as graphed in view of Xu, with a reasonable expectation of success, such as to better understand the Yi data as graphed per Zhang, for example where it is advantageous to associate users in the graph to derive a latent understanding, as is a common objective in machine-learning frameworks such as Yi, Zhang, and Shah.
Regarding claim 7, Yi in view of Zhang and further in view of Shah teach the method of claim 6, as discussed above. The aforementioned references teach the additional limitations for generating user clusters from the user embeddings by: retrieving one or more users associated with one or more corresponding user nodes in the users-users-experience graph with limited user engagement; and generating the user clusters from the one or more users and corresponding item embeddings based on similarities between the one or more users and the corresponding item embeddings (Zhang’s page 13, section 3.1.2, discussing clustering of both users and items as used to recommend both users and items, where the framework’s approach is amenable to cold-start scenarios short of the essentially zero-information case, as discussed above in relation to claim 1 (in reference to Zhang’s section 4.6.2’s last paragraph on page 25)). The motivation for combining the references is as discussed above in relation to claim 6.
10. Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Yi in view of Zhang and further in view of U.S. Patent Application Publication No. 2020/0005134 (“Ramanath”).
Regarding claim 10, Yi in view of Zhang teaches the method of claim 1, as discussed above. While user embeddings and item embeddings are taught, per Yi as discussed per claim 1, the aforementioned references do not teach the additional limitation wherein the user embeddings and the item embeddings are generated offline. Rather, the Examiner relies upon Ramanath to teach what Yi etc. otherwise lack; see, e.g., Ramanath’s [0144], discussing a step to “generate member embeddings as part of an offline workflow for index building” as part of a machine learning framework.
Like Yi, Ramanath is directed to techniques to better understand information/data in pursuit of a machine learning objective. Hence, the references are similarly directed and therefore analogous. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate Ramanath’s offline processing, which extends processing capability to other systems or times, into a framework such as Yi’s, with a reasonable expectation of success, such as to make efficient and optimal use of computing resources as may be required by resource constraints and the management of those resources among other workloads.
11. Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Yi in view of Zhang and Shah and further in view of Xu.
Regarding claim 8, Yi in view of Zhang and further in view of Shah teach the method of claim 7, as discussed above. The aforementioned references teach the additional limitation wherein the machine-learning model is a first machine-learning model (Yi’s Figure 1: two-tower DNN model, as discussed above per claim 1), but not the additional limitation further comprising: training a second machine-learning model to rank a subset of the candidate virtual experiences to recommend to the user, wherein training the second machine-learning model is based on the user clusters. Rather, the Examiner relies upon XU to teach what Yi etc. otherwise lack; see, e.g., Xu’s “recall module,” which determines similarity among already-graphed information, per Xu’s page 3, second full paragraph. The recall module operates on information derived from the machine-learning approaches/techniques already discussed per claim 1; hence, as a next-step characterization for similarity, it can be understood to be a separate model that is also built on machine learning. Similarity as determinable by the recall module is a basis for ranking per page 11’s 7th full paragraph: “... obtaining the candidate media account vector of similarity ranking, can be obtained by the number or proportion of ranking precedence, for example, obtaining 50 candidate media account vector ranking front, or obtaining the candidate media account vector accounting for 2 % of the total number of all media account vectors before being ranked. in the step 104, generating the recommendation information based on a plurality of recall media account number.” The motivation for combining the references is as discussed above in relation to claim 7.
Like Yi, Xu relates to a two/double-tower model to recommend media/video to a user based on characteristics/features of both the user and the media/video. Hence, the references mentioned here are strongly analogous as being directed to the same subject matter. It would have been obvious to incorporate Xu’s use of a graph to more concretely model a user’s engagement with media/video as a function of time into Yi’s framework, with a reasonable expectation of success, such as to improve upon Yi’s consideration of modeling a user’s streaming as a function of time (Yi’s page 2, left column, 1st full paragraph: “In contrast to MLP model where the output item vocabulary is stationary, we target the streaming data situation with vocabulary and distribution changes over time.”, and Yi’s page 5, right column, section 5.1’s 4th full paragraph: “One example is a sequence of k video ids the user recently watched.”).
Claim Rejections - 35 USC § 112
12. The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
13. Claims 1-20 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
At minimum, each of the independent claims recites language for subject matter relating to graphed data “with limited user engagement.” The term “limited,” as used here to qualify the requisite user engagement as recited, is a subjective term that is not defined in the claims themselves. The Examiner notes that what might constitute limited user engagement may vary from person to person, platform to platform, implementation to implementation, and so forth. See MPEP 2173.05(b).
The dependent claims inherit the features and deficiencies of the independent claims from which they depend, and are therefore rejected under the same rationale.
Allowable Subject Matter
14. Subject to Applicants’ overcoming the rejection under 35 U.S.C. 112(b), claims 11-15 will be allowed.
15. Claim 9 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims and if Applicants are able to overcome the rejection under 35 U.S.C. 112(b).
Conclusion
16. The prior art made of record and not relied upon is considered pertinent to Applicants’ disclosure:
U.S. Patent Application Publication No. 2022/0326840
CN 107103499 A
Non-Patent Literature “They Like Comedy, Don’t You? A Cluster-based Meta-learning for Cold-start Recommendation” (Jiang)
Non-Patent Literature “Addressing cold-start problem in recommendation systems” (Lam)
Non-Patent Literature “Addressing Cold Start in Recommender Systems with Hierarchical Graph Neural Networks” (Maksimov)
Non-Patent Literature “An enterprise-friendly book recommendation system for very sparse data” (Desai)
Non-Patent Literature “Systematic Approach for Cold Start Issues in Recommendations System” (Sarumathi)
Non-Patent Literature “Eliciting Auxiliary Information for Cold Start User Recommendation: A Survey” (Abdullah)
Non-Patent Literature “A Novel Approach for Collaborative Filtering to Alleviate the New Item Cold-Start Problem” (Sun)
Non-Patent Literature “Alleviating the data sparsity problem of recommender systems by clustering nodes in bipartite networks” (Zhang)
Non-Patent Literature “Real-time Retrieval for Recommendations”
Non-Patent Literature “Improving Service Recommendation by Alleviating the Sparsity with a Novel Ontology-based Clustering” (Rupasingha)
Non-Patent Literature “Improving the Cold Start Problem in Music Recommender Systems” (Cao)
Non-Patent Literature “Promoting Cold-Start Items in Recommender Systems” (Liu)
Non-Patent Literature “The history of Amazon's recommendation algorithm” (Hardesty)
Non-Patent Literature “Addressing the Item Cold-start Problem by Attribute-driven Active Learning” (Zhu)
17. Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHOURJO DASGUPTA whose telephone number is (571)272-7207. The examiner can normally be reached M-F 8am-5pm CST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Tamara Kyle, can be reached at 571-272-4241. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SHOURJO DASGUPTA/Primary Examiner, Art Unit 2144