DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Claims
This Office action is in response to the application filed by the applicant on 9/21/2023.
Claims 1-20 are pending and have been examined.
This action is made NON-FINAL.
Priority
Acknowledgment is made of applicant’s claim for Non-Provisional priority to allowed Application 18/208,199, filed on 6/9/2023.
Claim Rejections - 35 USC § 112
The following is a quotation of the first paragraph of 35 U.S.C. 112(a):
(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.
The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:
The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.
Claims 1, 4-8, 11-14, and 19-20 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which is not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention. The claims recite deep learning models (claims 1, 5-8, 12-15, and 19-20), first pass recommenders (claims 1, 4-6, 8, 11-13, 15, and 18-20), separately trained machine learning models (claims 4, 11, and 18), and a multi-task deep machine learning model (claims 7 and 14), each recited without specificity as to what steps the models perform.
The specification discloses in [0095], [0098], and [0099] that there may be more than one deep machine learning model and more than one first pass recommendation model. Additionally, the specification discloses in [00100] that the deep machine learning models may be the same as a multi-task deep learning model, such that these deep learning models may be inclusive of ANY deep learning model, without clear limitation of the models or of the steps performed by the models beyond being fed data and returning the same data in a different, more organized presentation. This is further clarified in the specification at paragraph [0053], which discloses “it should be noted that while the model is described herein as a multi-task deep machine learning model, other types of machine learning models, such as tree-based models, may be used instead of a multi-task deep machine learning model,” and at [0056], which discloses “the multi-task deep machine learning model is a deep convolutional neural network.” The specification also discloses in [0029] “The deep machine learning model may then output this ranking, which may be used either by another non-machine learning component or by another machine learning model to determine which cohort(s) to display to a user,” i.e., the deep machine learning model and the first pass recommendation model do not themselves perform the selection of cohorts. The specification further discloses in [0095] that the first pass recommender may be one or more of multiple recommender models, and explicitly discloses “the first pass recommenders 1140 may be their own machine learning models,” like a “people-you-may-know (PYMK) model” or a “discussion group ranking model,” both of which can categorically be deep learning models themselves.
Because the claims recite deep learning models and first pass recommender models that could reasonably be any general machine learning models or general deep learning models, recited and disclosed without explicit or implicit limits beyond broadly recited categories of machine learning models or deep machine learning models, these broadly recited and disclosed models perform the ranking of both the cohorts and the items within the cohorts in claims 1, 4, 8, 11, 15, and 18 without written description support as to what functions, steps, or processes are performed within the models. The claims represent merely data in and data out of these models, where the functions of the models themselves are withheld from the disclosure, without further detail explaining how these outcomes are technically achieved. The machine learning models in claims 5-6, 12-13, and 19-20 perform the same tasks as in claims 1, 8, and 15, adding iterative repetition of the input and output of data between the two models, again without detail explaining how these outcomes are technically achieved. Claims 6, 13, and 20 further recite that the models are retrained, but the processes of retraining and the functions of the models themselves are likewise withheld from the disclosure. Lastly, claims 7 and 14 recite a deep machine learning model and a multi-task deep machine learning model trained to optimize propensity, without detail as to the processes of training, while the functions of the models themselves are withheld from the disclosure. Therefore, the claims lack written description support for these machine learning models in a manner that supports the tasks that the models perform.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claims are directed to an abstract idea without significantly more.
Independent Claims
Regarding Claims 1, 8, and 15: 2A Prong 1: The claims recite: receiving an indication of a first user accessing second user content, obtaining user relationship information, feeding data into a model, outputting cohorts ranked by data characteristics, causing selection of ranked cohorts, obtaining ranked data within each cohort, and displaying ranked data, which are abstract ideas in the category of “certain methods of organizing human activity,” more specifically managing personal behavior or relationships or interactions between people. In these claims, the nondescript deep learning model that receives user data merely outputs data that is ranked by the characteristics of the data, without detailing the tasks performed by the machine learning model. Further, the limitation of obtaining items ranked within each chosen cohort recites another nondescript first pass recommender model, without detailing the steps performed by the model itself. Therefore, each of the models recited in these claims merely performs an abstract idea, adding to the abstract ideas in the rest of the claims, by ranking data that has been organized into similarity cohorts and ranking data within the cohorts to send to a graphical user interface for display to users. The claims input data, user data, relationship data, and similarity data between items related to users to recommend iterative interim and final recommendations of items to display, driving interactions and relationships between the users and the behavior of each user. Receiving data, obtaining relationship data, feeding data, causing selection of data, and displaying data are also in the category of “mental processes,” or “things that can be performed in the human mind, or by a human using a pen and paper,” encompassing observations, evaluations, judgments, and opinions (MPEP 2106.04(a)(2)(II) and (III)). The claims recite an abstract idea.
2A Prong 2: Claims 1 and 15 each recite the following additional computing elements: in claim 1, a system, processors, and a non-transitory computer-readable medium; in claim 15, a non-transitory machine-readable medium and processors; and in claims 1, 8, and 15, the online network and a graphical user interface. The specification discloses in [Figure 15] and paragraphs [00111] and [00115] that the computer structures recited in the claims, the computing systems, processors, non-transitory computer-readable medium, and associated hardware, are general purpose computing structures recited at a high level of generality. Instructions to apply abstract ideas in a technical environment or on general-purpose computing structures, adding the words “apply it,” where the computing structures are merely tools to perform the abstract idea, are not indicative of a practical application of an abstract idea (MPEP 2106.05(f)).
The claims recite the following additional data elements: an indication of access, a piece of content, information, a relationship, features of the first user, features of the second user, information about relationships, a ranking of cohorts, ranked cohorts, a selected cohort, a ranking of items, and items. These are non-functional descriptive information limitations that are not abstract ideas, do not carry patentable weight, and cannot be relied on to integrate the abstract idea into a practical application.
Each of claims 1, 8, and 15 recites the following additional elements: a deep machine learning model and a first pass recommender model. The claims do not limit how the feeding, outputting, or obtaining occur, and the specification does not impose limits on these functions. Therefore, the claims do not recite, and the specification does not disclose, details as to what the machine learning models do, i.e., what actions these models perform or how the models perform the actions that input the data into the models, translate the fed data into the organized data within the models, or output the organized data. The specification discloses in [0095], [0098], and [0099] that there may be more than one deep machine learning model and more than one first pass recommendation model. Additionally, the specification discloses in [00100] that the deep machine learning model may be the same as a multi-task deep learning model, such that these deep learning models may be inclusive of ANY deep learning model, without clear limitation of the models or of the steps performed by the model beyond being fed data and returning the same data in a different presentation.
This is further clarified in the specification at paragraph [0053], which discloses “it should be noted that while the model is described herein as a multi-task deep machine learning model, other types of machine learning models, such as tree-based models, may be used instead of a multi-task deep machine learning model,” and at [0056], which discloses “the multi-task deep machine learning model is a deep convolutional neural network.” The specification also discloses in [0029] “The deep machine learning model may then output this ranking, which may be used either by another non-machine learning component or by another machine learning model to determine which cohort(s) to display to a user,” such that the deep machine learning model and the first pass recommendation model may not be the only machine learning models implemented in the claims for causing selection of cohorts. The specification further discloses in [0095] that the first pass recommender may be one or more of multiple recommender models, explicitly disclosing “the first pass recommenders 1140 may be their own machine learning models,” like a “people-you-may-know (PYMK) model” or a “discussion group ranking model,” both of which can categorically be deep learning models themselves. Since all of these models may be deep learning models, the claims recite deep learning models and first pass recommender models without explicitly or implicitly limiting the invention beyond broadly recited categories of machine learning models or deep machine learning models to perform the ranking of both the cohorts and the items within the cohorts, and without explaining how these outcomes are technically achieved.
Both models, disclosed without specificity and recited without explaining how these outcomes are technically achieved, are, under the broadest reasonable interpretation by a person having ordinary skill in the art, merely steps that receive data and/or return organized data to facilitate display in a user’s graphical user interface, reciting the abstract idea. The organized data that is returned alters user behaviors or the relationships and interactions between users. Therefore, these machine learning model steps are merely abstract ideas, implemented by any general deep machine learning models, some of which may be any general purpose machine learning models, to perform the data organization for display. The claims merely generally link the use of the abstract idea to machine learning models, and generally link the machine learning models performing the abstract ideas to the field of use of displaying the best next selections for users in an online network (MPEP 2106.05(h)). These models are broadly claimed and broadly disclosed in the specification at a high level of generality, such that the claims merely add the words “apply it” to the abstract ideas, implementing the abstract idea with a machine learning model used as a tool to perform the abstract ideas. Therefore, these additional elements do not integrate the claims into a practical application, either individually or as an ordered combination in the claims as a whole (MPEP 2106.04(d)).
The claims recite the following limitations: receive indication data, obtain (receive) data, feed (send) data, output (send) data, obtain (receive) data, select data, and display data, which are merely sending, receiving, selecting, and displaying data. The specification does not reveal advances to databases, database architecture, storage techniques, data manipulation, or the theory, architecture, or functioning of networks, or to networking techniques. The specification does not reveal that these limitations offer improvements to the computers themselves, to the functioning of the computer, to algorithm architecture, machine learning technologies, multi-task deep machine learning technologies, deep machine learning technologies, iterative training/retraining of models, or optimizing propensity to select items to display; to computations, statistics, or mathematics; to the areas of sending, receiving, storing, accessing, registering, feeding, obtaining, selecting, displaying, recommending, predicting, generating, comparing, or characterizing data, parameters, input, content, or features; or to graphical user interface or other display technologies. The system is related to using input data, user data, relationship data, and similarity data between users to recommend iterative interim data and groups of data to arrive at a final data recommendation to display on a graphical user interface, driving interactions, relationships, and behavior between the users, but the disclosure does not reveal an application of these abstract ideas in a meaningful way beyond generally linking their use to particular technological environments, general machine learning models, or general purpose computing structures. The claims, as a whole, are merely drafting efforts designed to monopolize the exception (MPEP 2106.05(a), (e), (f), and (h)). The claims are directed to an abstract idea.
Step 2B: This analysis for Step 2B is commensurate with the analysis above for Step 2A, Prong 2. Therefore, for the same reasons discussed above, the additional elements that do not integrate the judicial exception into a practical application, when taken individually and in combination, also do not result in the claim as a whole amounting to significantly more than the identified judicial exception (MPEP 2106.05). The claims are directed to an abstract idea without significantly more.
Dependent Claims
Claims 2, 9, and 16 merely further identify the data without reciting an abstract idea. The claims define the piece of content as a user profile belonging to the second user. The claims do not recite additional elements; therefore, there are no additional elements that integrate the claims into a practical application or amount to significantly more.
Claims 3, 10, and 17 merely further identify the cohorts to include user cohorts and product cohorts, where items within the user cohorts are users of an online network and items within the product cohorts are products of the online network. The additional element of these claims is the online network; however, the online network is recited at a high level of generality. The specification discloses in [0002] that the online network is merely any social networking service. These claims generally link the non-abstract data limitations to the element of an online network without integrating the claims as a whole into a practical application or amounting to significantly more.
Claims 4, 11, and 18 recite that the first pass recommender is a separately trained machine learned model for each selected cohort. The claims add a refinement to the recommender model recitation that performs the outputting function of the independent claims. Substituting a separately trained machine learning model for each cohort for the first pass recommender, without claiming the steps performed in training the model or the steps performed by the model, and without explaining how these outcomes are technically achieved, still amounts to “apply it,” mere instructions to apply the abstract idea from the independent claims on a “separately trained machine learned model for each selected cohort,” using any machine learning model that may be trained, without disclosing limits on receiving, computing, or returning data. Therefore, these additional elements, the first pass recommender and a separately trained machine learned model, are not indicative of integration into a practical application. Further, for the same reasons, the additional elements are not enough to amount to significantly more than the abstract idea. The claims are directed to an abstract idea without significantly more.
Claims 5, 12, and 19 recite output signal (i.e., data) functions, input signals, and an iterative function. Inputting data into and outputting data from machine learning models is an abstract idea in the same category as the independent claims, “certain methods of organizing human activity,” for inputting and outputting user “signal” data iteratively to formulate the output data to the user that drives user behavior and the relationships between users according to the data. Inputting and outputting data are also merely sending and receiving data, while iterating these steps is merely repeating the sending and receiving of data. The specification does not reveal advances to inputting and outputting data iteratively. The selected cohorts are characterizations of data, i.e., non-functional descriptive information. The additional elements are the first pass recommender, which may be a deep machine learning model, and the deep machine learning model, both recited and disclosed at a high level of generality as analyzed above for the independent claims, without offering clarity as to what steps the models actually perform within the broadly claimed, generalized model recitations. Therefore, these claims perform abstract ideas, adding the words “apply it,” i.e., the claims are mere instructions to apply the abstract idea using unspecified machine learning models and/or deep machine learning models. Iterative training and dynamic adjustments are not improvements to the nature of machine learning and do not constitute an inventive concept, and neither the claims nor the specification discloses any specific method for improving machine learning algorithms or achieving technological advancements. Instead, the claims rely on generic machine learning techniques. The additional elements are not indicative of integration into a practical application and, for the same reasons, do not amount to significantly more than the abstract idea.
Claims 6, 13, and 20 recite that the first pass recommender is retrained based on output of the deep machine learning model and the deep machine learning model is retrained based on output of the first pass recommender, iteratively, as recited in the claims from which they depend. Retraining a model is an abstract idea in the same category as the independent claims because the data utilized to retrain the model is user data, where the general-purpose deep machine learning models and/or first pass recommender model are recited at a high level of generality and ultimately lead to displaying user data that drives user behavior and the relationships between users. The one or more signals are characterizations of data, i.e., non-functional descriptive information. The additional elements are the first pass recommender, which may be a deep machine learning model, and the deep machine learning model, both recited and disclosed at a high level of generality as analyzed above for the independent claims, without offering clarity as to what steps the models actually perform within the broadly claimed, generalized model recitations. Therefore, these claims perform abstract ideas, adding the words “apply it,” i.e., mere instructions to implement the abstract idea using unspecified machine learning models and/or deep machine learning models. The additional elements are not indicative of integration into a practical application and, for the same reasons, do not amount to significantly more than the abstract idea.
Claims 7 and 14 further clarify that the deep machine learning model is a multi-task deep learning model trained to optimize propensity to select an item from a cohort if items from a first cohort are displayed to the first user, and propensity for long-term engagement with an online network if items from the first cohort are displayed to the first user. The claims recite two propensity-optimization functions, which are abstract ideas in the same category as the independent claims, “certain methods of organizing human activity,” for managing personal behavior or relationships or interactions between people, recited at a high level of generality and ultimately leading to displaying user data that drives user behavior and the relationships between users. The cohort and the items from a first cohort are characterizations of data, i.e., non-functional descriptive information. The additional elements are the deep machine learning model and the trained multi-task deep machine learning model, both recited and disclosed at a high level of generality as analyzed above for the independent claims, without offering clarity as to what steps the models actually perform within the broadly claimed, generalized model recitations, how the multi-task deep learning model is trained, or how the model performs the optimization of propensity either to select an item from a cohort or for long-term engagement with an online network if items from the first cohort are displayed to the first user. The claims do not recite, and the specification does not disclose, the steps performed by the generalized models beyond the intended results of the data that the optimized propensity returns. Therefore, these claims perform abstract ideas, adding the words “apply it,” i.e., mere instructions to implement the abstract idea using unspecified deep machine learning models that are unspecified multi-task deep machine learning models.
The additional elements are not indicative of integration into a practical application, and for the same reasons, do not amount to significantly more than the abstract idea.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 8-10, and 15-17 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Flinn (US 2024/0046311 A1).
Claims 1, 8, and 15: Flinn discloses: receiving an indication of accessing a piece of content in an online network by a first user, the piece of content associated with a second user; [0069] (users are objects), [0070] “The one or more users 200 interact with the content aspect 230,” [0072] “The usage aspect 220 denotes captured usage information 202, further identified as usage behaviors,” (captured correlates with receiving), “reflects the tracking, storing, categorization, and clustering of the use and associated usage behaviors of the one or more users 200 interacting,” [0091] “may be in the context of ... a currently accessed object 232, or a communication with another user 200,” and [0097] “interactions among the one or more users;”
obtaining information about a relationship between one or more features of the first user and one or more features of the second user; [0072] “reflects the tracking, storing, categorization, and clustering of the use and associated usage behaviors of the one or more users 200 interacting,” [0087] “depicts a hypothetical structural aspect 210, including a plurality of objects 212 and associated relationships 214,”(where users are objects);
feeding the information about the relationship into a deep machine learning model, the deep machine learning model outputting a ranking of cohorts, each cohort comprising a plurality of items sharing at least one characteristic; [0075] “The usage behavior pre-processing 204 may also determine new “clustering’s,” [0080] “the adaptive system 100 identifies the preferences of the user 200 and adapts the adaptive system 100 in view of the preferences. Preferences describe the likes, tastes, partiality, and/or predilection of the user 200 that may be inferred during access of the objects 212 of the adaptive system,” [0081] “infers preferences based on information that may be obtained as the user 200 accesses the adaptive system 100 … The preference … and associated output 242,” [0084] “As used herein, preferences (whether explicit 252 or inferred 253) … imply a ranking (e.g., object A is better than object B),” [0087] “The adaptive recommendations 250 are presented as structural subsets of the structural aspect 210. FIG. 4 depicts a hypothetical structural aspect 210, including a plurality of objects 212 and associated relationships 214,” [0085] “The adaptive recommendations 250 may be augmented by automated inferences and interpretations about the content within individual and sets of objects 232 using statistical pattern matching may include, but is not limited to … neural network techniques” (neural networks are deep machine learning models), [0088-0090] (descriptions of multiple structural subsets of objects grouped by relationship), [0089] “The structural subsets 280 depicted in FIG. 4 represent but three of a myriad of possibilities from the original network of objects,” (implying that three subsets were chosen from the myriad of subsets, where subset is synonymous with cohort);
causing selection of at least one cohort in the ranking of cohorts, based on the ranking; [0084] “in that preferences imply a ranking (e.g., object A is better than object B),” [0089] “The structural subsets 280 depicted in FIG. 4 represent but three of a myriad of possibilities from the original network of objects,” (implying that three subsets were chosen, i.e. ranked the highest, from the myriad of subsets, where subset is synonymous with cohort), [0147] “The recommended structural subsets 280 along with associated content may constitute most or all of the user interface that is presented to the recommendations recipient, on a periodic or continuous basis;”
and for each selected cohort:
obtaining a ranking of one or more items within the selected cohort from a first pass recommender; and [0084] “in that preferences imply a ranking (e.g., object A is better than object B),” [0085] “recommendations 250 may be augmented by automated inferences and interpretations about the content within individual and sets of objects 232 using statistical pattern matching,” (statistical pattern matching is what the first pass recommender performs), [0151] “will be weighted as more important than other… in generating the recommendation 250…characteristics of objects 21 which are explicitly stored or tagged by the user 200 in a personal structural aspect 210 would typically be a particularly strong indication of preference… The recommendations…may thus prioritize this type of information to be more influential in driving the adaptive recommendations 250,” [0152] “then the object would typically rank low for inclusion in a set of recommended objects” (explicitly disclosing low ranking of objects in sets, i.e., items in cohorts, implying that the items within cohorts are ranked prior to recommendation display);
causing display of one or more items within the selected cohort in a graphical user interface, based on the ranking of items. [0084] “preferences imply a ranking (e.g., object A is better than object B),” [0107] “providing adaptive recommendations directly to individual users or to or groups of users (communities),” [0147] “The recommended structural subsets 280, along with associated content may constitute most or all of the user interface that is presented to the recommendation’s recipient, on a periodic or continuous basis. Such embodiments correspond to the continuous, fully adaptive interface.”
Claims 2, 9, and 16: Flinn discloses: The system of claim 1, wherein the piece of content is a user profile and the second user is the user to whom the user profile belongs. [0093] “System navigation and access behaviors include usage behaviors 270 such as accesses to, and interactions with, objects…the viewing or reading of displayed information,” (i.e., interaction with, viewing, or reading the second user profile, or access to objects, where users and profile content in the prior art are also objects), [0096] (user profile), [0098] “collaborative behaviors include, but are not limited to, … contributions of content or other types of objects for the benefit of others,” (profiles for the benefit of others to access and interact with), and [0089] “The structural subsets 280 depicted in FIG. 4 represent but three of a myriad of possibilities from the original network of objects,” (implying that three subsets were chosen, i.e., ranked the highest, from the myriad of subsets, where subset is synonymous with cohort).
Claims 3, 10, and 17: Flinn discloses: The system of claim 2, wherein the one or more selected cohorts include a user cohort and a product cohort, wherein items within the user cohort are users of the online network and items within the product cohort are products of the online network. [0092] “usage behaviors 270 may be associated with the entire user community, one or more sub-communities, or with individual users of the adaptive system 100,” [0094] “System navigation and access behaviors may also include executing transactions, including commercial transactions, such as the buying or selling of merchandise, services, or financial instruments,” [Figure 10] (286 reveals adaptive recommendation cohorts based on users and 288 reveals adaptive recommendation cohorts based on objects), and [Figure 4].
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 4-6, 11-13, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Flinn, US20240046311A1 in view of Sahasi, US20230004833A1.
Claims 4, 11, and 18: Sahasi teaches: The system of claim 1, wherein the first pass recommender is a separately trained machine learned model for each selected cohort. [0144] “may select the client model 1550 from the plurality of machine learning models based on clusters of clients with similar characteristics,” [0180] “machine learning model may be selected … based on the plurality of client clusters … may be associated with a first client cluster of the plurality of client clusters,” [0188] “each triggering event of the plurality of triggering events may be associated with a client identifier of the plurality of client identifiers and a machine learning model of the plurality of machine learning models.”
Flinn discloses an online network that uses machine learning and deep machine learning models to provide cohorts. Sahasi discloses separate machine learning models for each cluster, i.e., cohort. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosure of Flinn with that of Sahasi, where the two references together disclose each element of the instant claims, though not in a single reference. A person having ordinary skill in the art could have combined the disclosed elements, from the same field of use, according to known methods in the field of computer science, such that each element would perform the same function in the combination as it does separately, the result of the combination would have been predictable, and the combined disclosures render the claims obvious.
Claims 5, 12, and 19: Flinn discloses: The system of claim 4, one or more signals. [0073] “The adaptive system 100 tracks and stores user key strokes and mouse clicks, for example, as well as the time period in which these interactions occurred (e.g., timestamps), as captured usage information 202. From this captured usage information 202, the adaptive system 100 identifies usage behaviors 270 of the one or more user’s 200…usage behavior categories 246, usage behavior clusters 247, and usage behavioral patterns 248 are formulated for subsequent processing of the usage behaviors 270 by the adaptive system,” (signals), [0112] (the system processes signals and cues into preferences and interests);
Flinn does not disclose: wherein the first pass recommender outputs one or more signals to the deep machine learning model and the deep machine learning model outputs one or more signals to the first pass recommender in an iterative fashion.
Sahasi teaches: [0158] “Based on the inferences that may be drawn from a previous model, features may be added and/or deleted from the subset… is an iterative method,” [0193] “triggering event … may be associated with … a machine learning model of the plurality of machine learning models,” [0189] “may retrain the plurality of machine learning models based on the at least one triggering event,” (where a triggering event is the output of one or more signals from the alternate model).
Flinn discloses signals, and Sahasi discloses interacting machine learning models feeding signals iteratively. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosures of Flinn and Sahasi, where the two references together disclose each element of the instant claims, though not in a single reference. A person having ordinary skill in the art could have combined the disclosed elements, from the same field of use, according to known methods in the field of computer science, such that each element would perform the same function in the combination as it does separately, the result of the combination would have been predictable, and the combined disclosures render the claims obvious.
Claims 6, 13, and 20: Sahasi teaches: The system of claim 5, wherein the first pass recommender is retrained based on output of the deep machine learning model and the deep machine learning model is retrained based on output of the first pass recommender. [0193] “triggering event of the plurality of triggering events may be associated with … a machine learning model of the plurality of machine learning models,” [0189] “may retrain the plurality of machine learning models based on the at least one triggering event,” (where a triggering event is the output of one or more signals from the alternate model).
Claims 7 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Flinn, US20240046311A1 in view of Saha, US20180300334A.
Claims 7 and 14: Saha teaches: The system of claim 1, wherein the deep machine learning model is a multi-task deep machine learning model trained to optimize propensity to select an item from a cohort if items from a first cohort are displayed to the first user, and propensity for long-term engagement with an online network if items from the first cohort are displayed to the first user. [0013-0014] (a multi-task engine trained to optimize content items to be displayed from a list of items, using multiple competing constraints, gauging user propensity to engage with the item and propensity for a desired total level of engagement), [0015] (optimizations provide engagement maximization for items displayed).
Flinn discloses social media or online network cohort organization according to relationship, and Saha teaches the multi-task machine learning model trained to optimize propensity. The two references, in the same field of use of social media and online networks, together disclose all of the limitations of these claims. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to combine the disclosures of Flinn and Saha, though each element is not found in a single reference. A person having ordinary skill in the art could have combined the disclosed elements, from the same field of use, according to known methods in the field of computer science, such that each element would perform the same function in the combination as it does separately, the result of the combination would have been predictable, and the combined disclosures render the claims obvious.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANGELA HATCH whose telephone number is (571)270-1393. The examiner can normally be reached 10:00-6:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nathan Uber can be reached at (571)270-3923. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ANGELA HATCH
Examiner
Art Unit 3626
/ANGELA HATCH/ Examiner, Art Unit 3626
/NATHAN C UBER/ Supervisory Patent Examiner, Art Unit 3626