Prosecution Insights
Last updated: April 19, 2026
Application No. 17/171,365

MACHINE-LEARNING SYSTEMS FOR SIMULATING COLLABORATIVE BEHAVIOR BY INTERACTING USERS WITHIN A GROUP

Non-Final OA (§103, §112)
Filed: Feb 09, 2021
Examiner: SHALU, ZELALEM W
Art Unit: 2145
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adobe Inc.
OA Round: 5 (Non-Final)
Grant Probability: 29% (At Risk)
OA Rounds: 5-6
To Grant: 3y 2m
With Interview: 48%

Examiner Intelligence

Career Allow Rate: 29% (31 granted / 108 resolved; -26.3% vs TC avg)
Interview Lift: +19.0% (strong), comparing resolved cases with vs. without an interview
Avg Prosecution: 3y 2m (typical timeline); 34 applications currently pending
Career History: 142 total applications across all art units

Statute-Specific Performance

§101: 14.3% (-25.7% vs TC avg)
§103: 63.4% (+23.4% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)
Deltas compare against the Tech Center average estimate • Based on career data from 108 resolved cases
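
The headline figures above tie together with simple arithmetic. The sketch below is a hypothetical reconstruction, not the tool's actual methodology: it assumes the interview lift is additive in percentage points and that each "vs TC avg" delta is a plain difference between the examiner's rate and the Tech Center average.

    # Hypothetical reconstruction of the dashboard arithmetic; the report does not
    # disclose the actual methodology, so treat these formulas as assumptions.
    granted, resolved = 31, 108
    allow_rate = granted / resolved               # 0.287 -> displayed as 29%
    interview_lift = 0.190                        # +19.0 percentage points
    with_interview = allow_rate + interview_lift  # 0.477 -> displayed as 48%

    tc_delta = -0.263                             # "-26.3% vs TC avg"
    implied_tc_avg = allow_rate - tc_delta        # ~0.55 implied TC average allow rate

    # Statute-specific rates and the Tech Center averages they imply
    rates  = {"101": 0.143, "103": 0.634, "102": 0.081, "112": 0.108}
    deltas = {"101": -0.257, "103": 0.234, "102": -0.319, "112": -0.292}
    implied = {statute: rates[statute] - deltas[statute] for statute in rates}

    print(f"allow rate {allow_rate:.1%}, with interview {with_interview:.1%}")
    print(f"implied TC average allow rate {implied_tc_avg:.1%}")
    for statute, avg in implied.items():
        print(f"implied TC average for §{statute}: {avg:.1%}")

On those assumptions the displayed values are mutually consistent: 31/108 rounds to 29%, and adding the 19-point lift gives the 48% shown for prosecutions that include an interview.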

Office Action

§103, §112
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . This action is in response to the amendment filed on 01/13/2026. Claims 1-20 are pending in the case. Applicant Response In Applicant’s response dated 01/13/2026, Applicant amended Claims 1, 8 and 14 and are and argued against all objections and rejections previously set forth in the Office Action dated 09/22/2025. Continued Examination under 37 CFR 1.114 A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 01/13/2026 has been entered. Claim Interpretation 4. The following is a quotation of 35 U.S.C. 112(f): (f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph: An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof. 5. The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked. As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph: (A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function; (B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and (C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function. Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function. Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function. Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: Claim 14: … a step for predicting a decision that the set of users will make on behalf of the requesting entity during a next duration associated with a future time period. Claim 15: … the step for predicting the decision that the set of users will make on behalf of the requesting entity during the next duration further comprises… Claim 16: … the step for predicting the decision that the set of users will make on behalf of the requesting entity during the next duration further comprises: Claim 18: … the step for predicting the decision that the set of users will make on behalf of the requesting entity during the next duration further comprises: Claim 19: … the step for predicting the decision that the set of users will make on behalf of the requesting entity during the next duration further comprises: Claim 20: … the step for predicting the decision that the set of users will make on behalf of the requesting entity during the next duration further comprises” The specification provides sufficient details such that one of ordinary skill in the art would understand which filter structure or structures perform(s) the claimed function. As per Applicant’s specifications in Figure 6 and [0056]- [0062] illustrates is a flowchart diagram illustrating an example process 600 configured to process user feature datasets, to perform clustering for identifying user segments, and to provide personalized recommendations to lower attrition in accordance with some embodiments. The process 600 may be configured as computer programs (e.g., applications 123) executed on application server 123 or other computers, in which the systems, model components, processes, and embodiments described below can be implemented. Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 
112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. Examiner Comments 7. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claims 1-5, 7-12, 14-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over by Zhao (Pub. No.: US 20210081759 A1 Pub. Date: 2021-09-30) in view of Raziperchikolaei (Pub. No.: US 20210150337 A1, Pub. Date: 2021-05-20.) in further view of Yen (Pub. No.: US 20070016464 A1, Pub. Date: 2007-01-18) Regarding independent Claim 1, Zhao teaches a computer-implemented method, comprising: receiving, via a user interface (UI), a selection of a requesting entity (see Zhao: Fig.1, [0031], “Browser application 133 may facilitate user interaction with application server 120 and may be configured to transmit information to and receive information from application server 120 via network 110. Client device 130 may be any device configured to present user interfaces and receive inputs thereto.”, i.e. the user interacting with the platform corresponds to “requesting entity” because the user generates interaction requests processed by the system.) identifying, in response to the selection, a set of users collectively representing the requesting entity (see Zhao: Fig.4A, [0033], “Users may choose a variety of online software products or services provided by application server 120 to utilize particular services, such as financial management and tax return preparation and filing, using browser application 133 through client devices 130.”, (i.e., set of users that choose particular services of online software products (entities) are associated and identified with the software product entity )… see also [0038], “The static firmographics features 220 may be stored in user feature database 127 or a separate database. 
The static firmographics features 210 may include a category of static firmographics features. The static firmographics features may include user type based features to describe the products that users subscribe to use and user background information. The user background information may include which channel a user comes from. For example, the static firmographics features may include three features such as “SKU”, “Channel”, and “Migrator” (e.g., user changed from a standalone mode to online mode). providing, via the UI, in response to the selection, an identity of at least one user of the set of users (see Zhao: Fig.6, [0061], “The values of the concatenated vector x.sub.stack 428 may be fed to a cluster model 125 for identifying the user segments 430. Based on the values of the concatenated vector 428, cluster model 125 may be configured to perform clustering on the concatenated vector x.sub.stack 428 and determine a plurality of user segments 430 with respective segment identifiers for the users.”); for each user of the set of users (see Zhao: Fig.1, [0033], client device 130 is used by each user): accessing, using an activity layer, one or more behavior logs associated with the user captured during a duration (see Zhao: Fig.2, [0040], “The interactive user behavior features 220 (behavior logs) may be stored in user feature database 127 or a separate database.”), each behavior log of the one or more behavior logs characterizing one or more interactions between a user device operated by the user and a system or platform associated with a providing entity external to the requesting entity, the set of users, and the user device (see Zhao: Fig.1, [0040], “The interactive user behavior features 220 may include a plurality of categories representing user behaviors or activities, such as “Product Usage”, “Attach” (e.g., financial behaviors) and “Care” as illustrated in FIG. 2. Each category of the interactive user behavior features 220 may include a plurality of features to represent user behaviors or activities. The data format of the interactive user behavior features 220 may be continuous, categorical, time-series or a combination of all.”); [0035], “User feature database 127 may store user profiles and historical user feature data representing user behaviors during a certain time period while users interact with application server 120 regarding various products or services (e.g., QuickBooks™ products or services) through client devices 130 via network 110. The certain time period may be a 90-day retention of a user subscription to an online product, for example.”) generating a duration vector representation representing the one or more behavior logs captured within the duration (see Zhao: Fig.2, [0042], “The interactive user behavior features 220 may be converted (generated) to high-dimensional time-stamped vectors (a duration vector representation), to represent user behaviors about how users interact with the product.”), the duration vector representation being generated using a first trained machine-learning model (see Zhao: Fig.2, [0042], The interactive user behavior features 220 may be converted (generated) to high-dimensional time-stamped vectors (a duration vector representation), to represent user behaviors about how users interact with the product. 
In some embodiments, before being fed into the neural network system 124 (a first trained machine-learning model), the interactive user behavior features 220 may be preprocessed through natural language processing (NPL) and transformed into computer readable vectors using any type of word embedding algorithms, such as Global Vectors (Glove), Word2Vec, fast Text, etc.”). See also Zhao disclosing the generation of vector embeddings representing user interaction sequences using neural network models in [0057]-[0060] generating a user vector representation by inputting the duration vector representation into an attention layer, the user vector representation including one or more user- specific features concatenated with an output of the attention layer, (see Zhao: Fig.2, [0045], “Application server 120 may include a neural network system 124 that may be trained to build contextual representations (a user vector representation) based on natural language-based character representations of the interactive user behavior features 220. All of the user feature datasets fed into neural network system 124 may go through a compression process so that different features with vastly different scales and dimensions may be balanced out and produce layers of representative neurons or nodes for each user. As illustrated in FIG. 4A, neural network system 124 may include a neural network 410 and a deep neural network (DNN) 420. The neural network 410 may include multiple layers of bidirectional LSTM (Bi-LSTMs) 412 and an attention layer 414.”), […] and inputting the user vector representation into a second trained machine-learning model that is associated with the user (see Zhao: Fig.4A, [0055], “On a second to last layer of the trained deep neural network 420 (a second trained machine-learning model), the outputs from the cross subnetwork 424 and the deep subnetwork 426 may be concatenated or stacked to generate a concatenated vector substance 428. The values of the concatenated vector substance 428 may be fed to a cluster model 125 which may be configured to perform clustering on the concatenated vector x.sub.stack 428 to identify the user segments 430.”) aggregating using aggregated layer, the output of the second trained machine-learning model associated with each user of the set of users into an entity vector representation representing the requesting entity (see Zhao: Fig.4A, [0055], “On a second to last layer of the trained deep neural network 420 (second trained machine-learning model), the outputs from the cross subnetwork 424 and the deep subnetwork 426 may be concatenated or stacked to generate a concatenated vector x.sub.stack 428 (entity vector representation). The values of the concatenated vector x.sub.stack 428 may be fed to a cluster model 125 which may be configured (aggregating ) to perform clustering on the concatenated vector x.sub.stack 428 to identify the user segments 430.”), the entity vector representation including one or more entity-specific features concatenated with an output of the second trained machine-learning model, the entity-specific feature including an industry of the requesting entity (see Zhao: Fig.4A, Fig.6, [0060], “At 610, values of a concatenated vector x.sub.stack 428 may be extracted from nodes of the second to last layer of the deep neural network 420. 
The concatenated vector x.sub.stack 428 may represent the most condensed representation of static firmographics features 210 and interactive user behavior features 220 for each user.”) generating, for use by the providing entity, a prediction of a collaboration decision that the set of users will collectively make on behalf of the requesting entity during a next duration (see Zhao: Fig.4, [0071], “the trained DNN 420 and the clustering model 125 may predict on new users (i.e. recent cohort, FY18). The new users' activity (i.e. day 3, 5, 7 and 10 activity), may be fed into the DNN 420 to obtain the values of the concatenated vector x.sub.stack 428 extracted from a second to last layer output. The vector x.sub.stack 428 may be feed to the clustering model 125 configured with the unsupervised algorithm to determine which clusters the users belong to. The new users may be grouped as the predicted cluster and compare to another group of users (i.e. users from same period of time of the previous year instead of the current year”), the colaboration decision regarding whether to request the requesting entity one or more items provided by the providing entity (see Zhao: Fig.7, [0066], “At 706, based on the respective churn rate and churn size of each second user dataset, application server 120 may predict future retention levels and user features for each user segment during a near future time period. For example, different future retention levels may be assigned to the users based on certain thresholds associated with churn rates and churn sizes of user segments.”), the prediction of the collaboration decision being generated by inputting the entity vector representation into a single, fully-connected neural network including a fully-connected layer for each user of the set of users (see Zhao: Fig., [0078] “the trained DNN 420 and the clustering model 125 may predict on new users (i.e., recent cohort, FY18). The new users' activity (i.e., day 3, 5, 7 and 10 activity), may be fed into the DNN 420 to obtain the values of the concatenated vector x.sub.stack 428 extracted from a second to last layer output. The vector x.sub.stack 428 may be feed to the clustering model 125 configured with the unsupervised algorithm to determine which clusters the users belong to.”). see also [0025], discussing group of users that collaborate online, [0025], “large group of users may be classified into user segments (e.g., cohorts, clusters or groups) based on the user data features. Users in a particular segment may be referred to as “cohort” and the related cohort may share common features or behaviors with a subscribed online product within a defined time-span.”). Examiner notes that [0025] describes that large group of uses collaborate to engage with product or services, “a large group of users may be classified into user segments (e.g., cohorts, clusters or groups) based on the user data features. Users in a particular segment may be referred to as “cohort” and the related cohort may share common features or behaviors with a subscribed online product within a defined time-span.”) As shown above, Zhao teaches cohort (collections of users) analysis by applying machine learning deep neural networks to process user behavior features for identifying user segments and providing recommendation and predictions for an individual user. 
Because the prediction is calculated from interaction data the is collected from multiple users, the resulting predicted outcome reflects a collaborative decision or performance inferred from collective behavior of users. Zhao does not teach a computer implemented method comprising: the user vector representation including one or more user- specific features concatenated with an output of the attention layer, the user specific features including a job of the user generating, using the single, fully-connected neural network, and based on the interactions for each user of the set of users between the user device and the system or platform associated with the providing entity, a prediction parameter representing the prediction of the collaborative decision yet to be made by the set of users including the at least one user, third trained machine-learning model providing a message to the providing entity, the message including the prediction parameter; causing, for the providing entity, in response to the prediction parameter and in association with the system or platform of the providing entity, a transmission of a communication to the user device. However, Raziperchikolaei teaches the computer-implemented method, comprising: receiving, via a user interface (UI), a selection of a requesting entity (see Raziperchikolaei: Fig.1, [0050], “”) identifying, in response to the selection, a set of users collectively representing the requesting entity (see Raziperchikolaei: Fig.1, [0050], “”) providing, via the UI, in response to the selection, an identity of at least one user of the set of users (see Raziperchikolaei: Fig.1, [0050], “”); the user vector representation including one or more user- specific features concatenated with an output of the attention layer (see Raziperchikolaei: Fig.4, [0060], “the intermediate output vector is created by concatenating all the lower-dimensional user and item vectors together. The lower-dimensional user and item vectors may be concatenated in any order, provided the order is the same in both the training and prediction phases. In other embodiments, the lower-dimensional user and item vectors are combined in more complicated ways, such the combination of concatenation and entry-wise product, which may increase accuracy of predictions, but also increase computational time and resources.”), the user specific features including a job of the user (see Raziperchikolaei: Fig.4, [0047], “the user data includes the user's past rating on items and the user's profile information, and the item data includes past ratings for the item from other users and item profile data. User profile data may include one or more of the following: user age, user location, user gender, user occupation, user income range, and user ethnicity. The user data may be provided by a user and/or derived by the system using machine learning. The item profile data may include one or more of the following: item description, item price, item category, and item image.”) generating, using the single, fully-connected neural network (see Raziperchikolaei: Fig.4, [0062], “The neural network encoders 420 may be any neural network that can receive a multidimensional input vector and generate a lower-dimensional representation of the vector. 
For example, the neural network encoders may be multilayer perceptron ( i.e., fully-connected neural network), a long short-term network (LSTM), or a convolutional network.”) , and based on the interactions for each user of the set of users between the user device and the system or platform associated with the providing entity (see Raziperchikolaei: Fig.4, [0061], “The single multidimensional vector representation of user and item data is then inputted to another neural network that maps the multidimensional vector representation of user and item data to a predicted user rating of the item (e.g., see prediction neural network 450 in FIG. 4) (step 240). This neural network is referred to herein as the “prediction neural network.” The output of the prediction neural network is the user's predicted rating for the item.”, … [0062] The prediction neural network 450 may be any neural network that can receive a multidimensional vector input and output a scalar value that is predictive of a user rating (e.g., a multilayer perceptron).”), a prediction parameter representing the prediction of the collaborative decision yet to be made by the set of users including the at least one user (see Raziperchikolaei: Fig.5, [0083], “The prediction module includes encoding neural networks 550, a single vector creation unit 560 that creates the single multidimensional vector representation of user and item data, and the prediction neural network 570. Training module 580 trains the prediction module 530 in accordance with the method of FIGS. 2A-B. The prediction module 530 makes predictions based on the method of FIG. 3. A recommendation module 540 receives the predicted user ratings and recommends items to users based on the predictions. For example, it may select products associated with the n highest predicted ratings after factoring in any applicable business rules. Those skilled in the art will appreciate that a recommendation system may have other modules that are not relevant to the disclosure herein.”) Because Zhao and Raziperchikolaei are in the same/similar field of endeavor of providing personalized recommendations for users, accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention to modify the teaching of Zhao to include the system that generate, using the single, fully-connected neural network, a prediction parameter representing the prediction of the collaborative decision yet to be made by the set of users including the at least one user as taught by Raziperchikolaei. One would have been motivated to make such a combination in order to provide enhancing analysis through linking and sharing information using knowledge and experience distributed among team members. (see Raziperchikolaei, [0003] ) Zhao and Raziperchikolaei does not teach the computer-implemented method, comprising: providing a message to the providing entity, the message including the prediction parameter; causing, for the providing entity, in response to the prediction parameter and in association with the system or platform of the providing entity, a transmission of a communication to the user device. However, Yen teaches the computer-implemented method, comprising: providing a message to the providing entity, the message including the prediction parameter (see Yen: Fig.1, [0071], “MM stores the knowledge and information that are shared by all the member of a team. 
The SMM implemented in R-CAST contains four components: team processes (including the RPD process), team structure, shared domain knowledge, and information-needs graphs. The shared domain knowledge may include inter-agent conversation protocols and social norms to follow, domain-specific inference knowledge, etc. An information-needs graph maintains a dynamic, progress-sensitive structure of teammates' information-needs, ensuring that only relevant information is delivered to the right entity at the right time.") causing, for the providing entity, in response to the prediction parameter and in association with the system or platform of the providing entity, a transmission of a communication to the user device (see Yen: Fig.1, [0071],“ The SMM Management module is responsible for updating and refining the SMM and may entail inter-agent communications to maintain cross-agent consistency of certain critical parts of team members' SMMs.”). (see Yen: Fig.1, [0074],“An agent may have multiple goals to pursue. An R-CAST agent uses the AM module to manage the attentions under its concern. For instance, based on the agent's situation assessment and cooperation requests from other agents, the agent may pay more attention to one goal, or suspend the pursuit of one goal and switch to another. More specifically, a team process may involve various kinds of decisions (e.g., working under multiple contexts). Since each decision task will trigger one RPD process, it is the AM's responsibility to effectively and carefully adjust the decision-maker agent's attentions on decision tasks.”) Because Zhao, Raziperchikolaei and Yen are in the same/similar field of endeavor of providing personalized recommendations for users, accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention to modify the teaching of Zhao to include the system that provide providing a message to the providing entity and cause, a responsive actions in response to the prediction of the collaborative decision as taught by Yen. One would have been motivated to make such a combination in order to provide enhancing analysis through linking and sharing information using knowledge and experience distributed among team members. (see Yen, [0003] ) Regarding Claim 2, Zhao, Raziperchikolaei and Yen teaches all the limitations of Claim 1. Zhao further teaches the computer-implemented method wherein: generating the duration vector representation (see Zhao: Fig.2, [0042], “The interactive user behavior features 220 may be converted (generated) to high-dimensional time-stamped vectors (a duration vector representation), to represent user behaviors about how users interact with the product.”), further comprises: determining a frequency distribution of the one or more interactions between the user device operated by the user and the system or platform associated with the providing entity (see Zhao: Fig.3, [0043], “illustrates an example time-stamped historical user feature data structure in accordance with some embodiments. User feature data for each user may include company level static firmographics features and time-stamped interactive user behavior features (e.g., product usage information) collected by a data management system during a time period of past N days. Time-stamped interactive user behavior features of each day may include features of a certain day retention, invoice, expense, and other product usage information illustrated in FIG. 
2.”), wherein the one or more interactions is associated with at least one activity type from a set of activity types (see Zhao: Fig.2, [0036], “illustrates example features of a user feature dataset structure 200 in accordance with some embodiments. Original user feature datasets 128 associated with a plurality of users may be acquired by a data management system in communication with the application server 120. The original user feature datasets 128 may include static firmographics features 210 and interactive user behavior features 220. Different features may be configured and represented in different data formats to show the respective user behavior information associated with a subscription to a product.”); and representing the duration vector representation as a vector having a length corresponding to a number activity types in the set of activity types (see Zhao: Fig.4A, [0042], The interactive user behavior features 220 may be converted to high-dimensional time-stamped vectors to represent user behaviors about how users interact with the product.”) Regarding Claim 3, Zhao, Raziperchikolaei and Yen teaches all the limitations of Claim 1. Zhao further teaches the computer-implemented method wherein: generating the duration vector representation (see Zhao: Fig.2, [0042], “The interactive user behavior features 220 may be converted (generated) to high-dimensional time-stamped vectors (a duration vector representation), to represent user behaviors about how users interact with the product.”), further comprises: for each interaction of the one or more interactions that occurred within the duration [0040], “The interactive user behavior features 220 may be stored in user feature database 127 or a separate database. The interactive user behavior features 220 may include a plurality of categories representing user behaviors or activities, such as “Product Usage”, “Attach” (e.g., financial behaviors) and “Care” as illustrated in FIG. 2.” ) generating an activity vector representation to numerically represent the interaction, the activity vector representation being generated by inputting the interaction into a fourth trained machine-learning model (see Zhao: Fig.4B, [0048], “At 504, the Bi-LSTMs neural network layers 412 may be trained to learn model weights to process the interactive user behavior features 220 to generate a contextual representation of the interactive user behavior features 220.”); inputting the activity vector representation for each interaction of the one or more interactions into another attention layer (see Zhao: Fig.4B, [0049], “At 506, as illustrated in FIG. 4B, the attention layer 414 may receive outputs of contextual representations of the interactive user behavior features 220 from the Bi-LSTMs neural network layers 412 and may learn to attenuate irrelevant modalities while amplifying the most informative modalities to extract relevant context from the contextual representation of the interactive user behavior features 220.”); and generating the duration vector representation using an output of another attention layer (see Zhao: Fig.4B, [0049], “The attention layer 414 may be used to improve model performance in terms of obtaining aggregated representations of any input text by focusing on different parts of the text differently. The attention layer 414 may be configured to output an embedding vector of a time distributed concatenation representation 416 of the interactive user behavior features 220. 
The time distributed concatenation representation 416 may represent embedding vectors of the interactive user features behavior 220.”) Regarding Claim 4, Zhao, Raziperchikolaei and Yen teaches all the limitations of Claim 1. Zhao further teaches the computer-implemented method wherein: the next duration is a future time period (see Zhao: Fig.2, [0026], “For example, an insight about user behaviors may be used to predict most possible actions in a particular future period for a segment of users.”), wherein the duration is a past time period (see Zhao: Fig.2, [0026], “User feature data for each user may include company level static firmographics features and time-stamped interactive user behavior features (e.g., product usage information) collected by a data management system during a time period of past N days.”), and wherein the collaborative decision that the set of users will collectively make on behalf of the requesting entity is determined on a rolling basis, such that at an end of the next duration, another prediction of the decision that the set of users will make on behalf of the requesting entity is determined for another next duration (see Zhao: Fig.4, [0070], “The DNN 420 illustrated in FIG. 4B may be trained with future retention as target (i.e. churn within the first 90 days since the service product signup date) to process features of user past activity (i.e. user QBO activity in first 30 days). For example, a combination of certain days' activities (e.g., day 2, 6, 8, 12 and 29 activity) may be fed into the DNN 420 for predicting 90-day churn rate to get an output from the second to last layer of the DNN 420. The output from the second to last layer may be used to train the clustering model 125 with an unsupervised algorithm to get the respective clusters or user segments based on the user features.”) Regarding Claim 5, Zhao, Raziperchikolaei and Yen teaches all the limitations of Claim 1. Zhao further teaches the computer-implemented method wherein: aggregating the output of the second trained machine-learning model associated with each user of the set of users (see Zhao: Fig.4A, [0055], “On a second to last layer of the trained deep neural network 420 (second trained machine-learning model), the outputs from the cross subnetwork 424 and the deep subnetwork 426 may be concatenated or stacked to generate a concatenated vector x.sub.stack 428 (entity vector representation). The values of the concatenated vector x.sub.stack 428 may be fed to a cluster model 125 which may be configured (aggregating ) to perform clustering on the concatenated vector x.sub.stack 428 to identify the user segments 430.”), further comprises: inputting the user vector representation for each user of the set of users and the one or more entity-specific features into a feedforward neural network (see Zhao: Fig.4B, [0054], “Deep subnetwork 426 may be a fully-connected feed-forward neural network and each deep layer of the deep subnetwork 426 may be represented by a function of equation.”); and generating the prediction of the collaborative decision that the set of users will collectively make on behalf of the requesting entity during the next duration (see Zhao: Fig.7, [0066], “At 706, based on the respective churn rate and churn size of each second user dataset, application server 120 may predict future retention levels and user features for each user segment during a near future time period. 
For example, different future retention levels may be assigned to the users based on certain thresholds associated with churn rates and churn sizes of user segments.”), the prediction being generated using an output of the feedforward neural network (see Zhao: Fig.4B, [0054], “Deep subnetwork 426 may be a fully-connected feed-forward neural network and each deep layer of the deep subnetwork 426 may be represented by a function of equation.”); As shown above, Zhao teaches cohort (collections of users) analysis by applying machine learning deep neural networks to process user behavior features for identifying user segments and providing recommendation and predictions for an individual user. Because the prediction is calculated from interaction data the is collected from multiple users, the resulting predicted outcome reflects a collaborative decision or performance inferred from collective behavior of users. Regarding Claim 7, Zhao, Raziperchikolaei and Yen teaches all the limitations of Claim 1. Zhao further teaches the computer-implemented method wherein: aggregating the output of the second trained machine-learning model associated with each user of the set of users (see Zhao: Fig.4A, [0055], “On a second to last layer of the trained deep neural network 420 (second trained machine-learning model), the outputs from the cross subnetwork 424 and the deep subnetwork 426 may be concatenated or stacked to generate a concatenated vector x.sub.stack 428 (entity vector representation). The values of the concatenated vector x.sub.stack 428 may be fed to a cluster model 125 which may be configured (aggregating ) to perform clustering on the concatenated vector x.sub.stack 428 to identify the user segments 430.”), further comprises: detecting a behavior performed by at least one user of the set of users (see Zhao: Fig.2, [0038], “The static firmographics features 220 may be stored in user feature database 127 or a separate database. The static firmographics features 210 may include a category of static firmographics features. The static firmographics features may include user type-based features to describe the products that users subscribe to use and user background information.”), the detection being based on the user vector representation of the at least one user (see Zhao: Fig.2, [0042], “The interactive user behavior features 220 may be converted to high-dimensional time-stamped vectors to represent user behaviors about how users interact with the product.”), and generating the prediction of the decision that the set of users will make on behalf of the requesting entity during the next duration, the prediction being generated based on the detection of the behavior performed by the at least one user (see Zhao: Fig.7, [0066], “At 706, based on the respective churn rate and churn size of each second user dataset, application server 120 may predict future retention levels and user features for each user segment during a near future time period. For example, different future retention levels may be assigned to the users based on certain thresholds associated with churn rates and churn sizes of user segments.”), Regarding independent Claim 8, Claim 8 is directed to a system claim and the claim has similar/same claim limitation as Claim 1 and is rejected under the same rationale. Regarding Claim 9, Claim 9 is directed to a system claim and the claim has similar/same limitations as Claim 2 and is rejected under the same rationale. 
Regarding Claim 10, Claim 10 is directed to a system claim and the claim has similar/same limitations as Claim 3 and is rejected under the same rationale. Regarding Claim 11, Claim 11 is directed to a system claim and the claim has similar/same limitations as Claim 4 and is rejected under the same rationale. Regarding Claim 12, Claim 12 is directed to a system claim and the claim has similar/same limitations as Claim 5 and is rejected under the same rationale. Regarding independent Claim 14, Claim 14 is directed to computer-implemented method claim and the claim has similar/same claim limitation as Claim 1 and is rejected under the same rationale. Regarding Claim 15, Claim 15 is directed to a computer-implemented method claim and the claim has similar/same limitations as Claim 2 and is rejected under the same rationale. Regarding Claim 16, Claim 16 is directed to a computer-implemented method claim and the claim has similar/same limitations as Claim 3 and is rejected under the same rationale. Regarding Claim 17, Claim 17 is directed to a computer-implemented method claim and the claim has similar/same limitations as Claim 4 and is rejected under the same rationale. Regarding Claim 18, Claim 18 is directed to a computer-implemented method claim and the claim has similar/same limitations as Claim 5 and is rejected under the same rationale. Regarding Claim 20, Claim 20 is directed to a computer-implemented method claim and the claim has similar/same limitations as Claim 7 and is rejected under the same rationale. Claims 6, 13 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhao, Raziperchikolaei and Yen as applied to claims 1-5,7-12, 14-18 and 20 as shown above and in further view of SINGH (Pub. No.: US 20210034949 Al, Pub. Date: 2021-02-04.) Regarding Claim 6, Zhao, Raziperchikolaei and Yen teaches all the limitations of Claim 1. Zhao further teaches the computer-implemented method wherein: aggregating the output of the second trained machine-learning model associated with each user of the set of users (see Zhao: Fig.4A, [0055], “On a second to last layer of the trained deep neural network 420 (second trained machine-learning model), the outputs from the cross subnetwork 424 and the deep subnetwork 426 may be concatenated or stacked to generate a concatenated vector x.sub.stack 428 (entity vector representation). The values of the concatenated vector x.sub.stack 428 may be fed to a cluster model 125 which may be configured (aggregating ) to perform clustering on the concatenated vector x.sub.stack 428 to identify the user segments 430.”), further comprises: inputting the user vector representation for each user of the set of users into a [ bidirectional LSTM (Bi-LSTMs] (see Zhao: Fig.4A, [0055], “On a second to last layer of the trained deep neural network 420, the outputs from the cross subnetwork 424 and the deep subnetwork 426 may be concatenated or stacked to generate a concatenated vector x.sub.stack 428. The values of the concatenated vector x.sub.stack 428 may be fed to a cluster model 125 which may be configured to perform clustering on the concatenated vector x.sub.stack 428 to identify the user segments 430.”)… [0048], neural network system 124 may include a neural network 410 and a deep neural network (DNN) 420. 
The neural network 410 may include multiple layers of bidirectional LSTM (Bi-LSTMs) 412 and an attention layer 414.”), including a number of hidden layers, wherein the number of hidden layers is configurable to select a probability represented by an output of the [ bidirectional LSTM (Bi-LSTMs], (see Zhao: Fig.6, [0054], “the application server 120 may apply a Bi-directional Long Short-Term Memory (Bi-LSTM) neural network to generate a contextual representation and an attention layer 414 to further generate an output as a time distributed concatenation representation 416. The time distributed concatenation representation 416 may represent embedding vectors of the interactive user features behavior 220.”); concatenating an output of the [ bidirectional LSTM (Bi-LSTMs] with the one or more entity-specific features (see Zhao: Fig.6, [0054], “the application server 120 may apply a Bi-directional Long Short-Term Memory (Bi-LSTM) neural network to generate a contextual representation and an attention layer 414 to further generate an output as a time distributed concatenation representation 416. The time distributed concatenation representation 416 may represent embedding vectors of the interactive user features behavior 220.”); inputting the output of the [bidirectional LSTM (Bi-LSTMs ] concatenated with the one or more entity-specific features into a feedforward neural network including a number of hidden layers, wherein the number of hidden layers is configurable to select a probability represented by an output of the feedforward neural network; (see Zhao: Fig.4B, [0059], “the cross subnetwork 424 and the deep subnetwork 426 may be two neural networks jointly being trained based on the retention labels with the same vector x.sub.0 to go through multiple deep learning layers till convergence. Each layer may produce high-order interactions based on values of the previous layer. Cross subnetwork 424 may be trained by conducting a linear cross feature combination of the vector x.sub.0. Deep subnetwork 426 may be a fully-connected feed-forward neural network as described above and may be trained with the same vector x.sub.0 till convergence. Deep subnetwork 426 may be configured to utilize a Rectified Linear Unit (ReLU) at each layer for processing the vector x.sub.0 to generate a vector representation of static data features.”); and generating the prediction of the [group or cohort] decision that the set of users will collectively make on behalf of the requesting entity during the next duration (see Zhao: Fig.7, [0066], “At 706, based on the respective churn rate and churn size of each second user dataset, application server 120 may predict future retention levels and user features for each user segment during a near future time period. For example, different future retention levels may be assigned to the users based on certain thresholds associated with churn rates and churn sizes of user segments.”), the prediction being generated using an output of the feedforward neural network (see Zhao: Fig.4B, [0054], “Deep subnetwork 426 may be a fully-connected feed-forward neural network and each deep layer of the deep subnetwork 426 may be represented by a function of equation.”); As shown above, Zhao discloses neural network system that include a neural network 410 and a deep neural network (DNN) 420. The neural network 410 may include multiple layers of bidirectional LSTM (Bi-LSTMs) 412 and an attention layer 414. 
Examiner note that GRUs are simplified version of LSTMs that use single “update gate” to control the flow of information into the memory cell. Zhao teaches cohort (group of users) analysis by applying machine learning deep neural networks to process user behavior features for identifying user segments and providing recommendation and predictions for an individual user. Zhao does not explicitly teaching is the many-to-one gated recurrent unit (GRU) in generation prediction decision. SINGH teaches the computer-implemented method that comprise many-to-one gated recurrent unit (GRU) in generation prediction decision (see SINGH: Fig.3, [0042], “RNN/GRU 304 may receive the time series data as training data, such that RNN/GRU 304 may perform as a pattern recognition engine. Thus, in operation, once trained, RNN/GRU 304 may monitor telemetry data from information handling resources of client information handling systems 102 and predict a failure status (e.g., failed, about to fail, healthy) based on pattern analysis of the telemetry data. Accordingly, RNN/GRU 304 may predict a failure of an information handling resource before it actually occurs. As explained in greater detail below, RNN/GRU 304 may be unable to handle any uneven time gaps in the sample or the time series of its training data, thus imputing missing data from the training data in order to perform training and prediction.”) Because Zhao, Raziperchikolaei, Yen and SINGH are in the same/similar field of endeavor of copy and paste operation and clipboard management, accordingly, it would have been obvious to a person of ordinary skill in the art, before the effective filing date of the invention, to modify the teaching of Zhao to include the system comprise many-to-one gated recurrent unit (GRU) in generation prediction decision as taught by SINGH. After modification of Zhao, the prediction decision model that use bidirectional LSTM (Bi-LSTMs) to generate prediction, can also apply many-to-one gated recurrent unit (GRU) in generation prediction decision as taught by SINGH. One would have been motivated to make such a combination in order to provide users with efficient machine learning model that have faster training times, easier to train and handle larger datasets. (see SINGH: [0004]) Regarding Claim 13, Claim 13 is directed to a system claim and the claim has similar/same limitations as Claim 6 and is rejected under the same rationale. Regarding Claim 19, Claim 19 is directed to a computer-implemented method claim and the claim has similar/same limitations as Claim 6 and is rejected under the same rationale. Response to Arguments Claim Rejections - 35 U.S.C. § 112(f), The interpretation to the claims under 35 U.S.C. § 112(f), is acknowledged by applicant and is maintained in the current rejection. Claim Rejections - 35 U.S.C. § 103, Applicant’s arguments with respect to claim amendments have been considered but are moot considering the new combination of references being used in the current rejection. The new combination of references was necessitated by Applicant’s claim amendments. Therefore, the claims are rejected under the new combination of references as indicated above. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. PGPUB NUMBER: INVENTOR-INFORMATION: TITLE / DESCRIPTION US 20220351021 A1 Biswas; Pratik K. 
Title: HYBRID RECOMMENDATION SYSTEM AND METHODS BASED ON COLLABORATIVE FILTERING INTEGRATED WITH MULTIPLE NEURAL NETWORKS Description: A hybrid recommendation system may generate recommendations, predictions, and/or classifications by applying collaborative filtering to influence Convolutional Neural Networks (“CNNs”), Recurrent Neural Networks (“RNNs”), and/or other neural networks that model characteristic, structural, sequential, contextual, interactive, and/or other relationships from interactions of different users.. Any inquiry concerning this communication or earlier communications from the examiner should be directed to ZELALEM W SHALU whose telephone number is (571)272-3003. The examiner can normally be reached M- F 0800am- 0500pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Cesar Paula can be reached on (571) 272-4128. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Zelalem Shalu/Examiner, Art Unit 2145 /CESAR B PAULA/Supervisory Patent Examiner, Art Unit 2145
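
For orientation, the architecture the §103 mapping walks through (per-user behavior logs encoded by a Bi-LSTM with an attention layer, concatenated with static user-specific features, reduced by fully-connected layers, then aggregated for an entity-level prediction) can be sketched in a few lines. The sketch below is purely illustrative: it is not Zhao's implementation, Raziperchikolaei's, or the claimed method, and every module name, dimension, and the mean-pooling aggregation step are assumptions made for this example.

    # Illustrative sketch only; module names, dimensions, and the aggregation
    # strategy are hypothetical, chosen to mirror the structure described in the OA.
    import torch
    import torch.nn as nn

    class UserEncoder(nn.Module):
        """Bi-LSTM + attention over a user's time-stamped behavior features."""
        def __init__(self, feat_dim=32, hidden=64, static_dim=8):
            super().__init__()
            self.bilstm = nn.LSTM(feat_dim, hidden, num_layers=2,
                                  batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)        # scores each time step
            self.head = nn.Sequential(                  # stand-in for a deep subnetwork
                nn.Linear(2 * hidden + static_dim, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU())

        def forward(self, behavior_seq, static_feats):
            # behavior_seq: (batch, days, feat_dim); static_feats: (batch, static_dim)
            ctx, _ = self.bilstm(behavior_seq)              # (batch, days, 2*hidden)
            weights = torch.softmax(self.attn(ctx), dim=1)  # attention over time steps
            pooled = (weights * ctx).sum(dim=1)             # (batch, 2*hidden)
            x = torch.cat([pooled, static_feats], dim=-1)   # concat user-specific features
            return self.head(x)                             # per-user vector

    class EntityPredictor(nn.Module):
        """Aggregates per-user vectors into an entity vector and scores a decision."""
        def __init__(self, user_dim=32, entity_feat_dim=4):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(user_dim + entity_feat_dim, 32), nn.ReLU(),
                nn.Linear(32, 1))

        def forward(self, user_vectors, entity_feats):
            # user_vectors: (num_users, user_dim); entity_feats: (entity_feat_dim,)
            entity_vec = user_vectors.mean(dim=0)           # simple aggregation layer
            x = torch.cat([entity_vec, entity_feats], dim=-1)
            return torch.sigmoid(self.fc(x))                # probability of the decision

    # Toy usage: 3 users, 90 days of behavior features each.
    encoder, predictor = UserEncoder(), EntityPredictor()
    logs = torch.randn(3, 90, 32)
    static = torch.randn(3, 8)
    user_vecs = encoder(logs, static)
    score = predictor(user_vecs, torch.randn(4))
    print(score.shape)  # torch.Size([1])

Mean pooling stands in here for the claimed "aggregated layer"; the claims instead recite a single fully-connected network with a fully-connected layer for each user, and Zhao as cited feeds the concatenated vector to a clustering model rather than a sigmoid head.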

Prosecution Timeline

Feb 09, 2021
Application Filed
Jun 07, 2024
Non-Final Rejection — §103, §112
Jul 31, 2024
Interview Requested
Aug 15, 2024
Applicant Interview (Telephonic)
Aug 15, 2024
Examiner Interview Summary
Sep 10, 2024
Response Filed
Dec 14, 2024
Final Rejection — §103, §112
Jan 02, 2025
Interview Requested
Jan 22, 2025
Applicant Interview (Telephonic)
Jan 22, 2025
Examiner Interview Summary
Mar 07, 2025
Request for Continued Examination
Mar 14, 2025
Response after Non-Final Action
Mar 15, 2025
Non-Final Rejection — §103, §112
May 27, 2025
Applicant Interview (Telephonic)
Jun 04, 2025
Examiner Interview Summary
Jun 05, 2025
Response Filed
Sep 17, 2025
Final Rejection — §103, §112
Jan 13, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Mar 05, 2026
Non-Final Rejection — §103, §112
Apr 15, 2026
Interview Requested

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12477016
AUTOMATION OF VISUAL INDICATORS FOR DISTINGUISHING ACTIVE SPEAKERS OF USERS DISPLAYED AS THREE-DIMENSIONAL REPRESENTATIONS
2y 5m to grant Granted Nov 18, 2025
Patent 12468969
METHODS FOR CORRELATED HISTOGRAM CLUSTERING FOR MACHINE LEARNING
2y 5m to grant Granted Nov 11, 2025
Patent 12419611
PATIENT MONITOR, PHYSIOLOGICAL INFORMATION MEASUREMENT SYSTEM, PROGRAM TO BE USED IN PATIENT MONITOR, AND NON-TRANSITORY COMPUTER READABLE MEDIUM IN WHICH PROGRAM TO BE USED IN PATIENT MONITOR IS STORED
2y 5m to grant Granted Sep 23, 2025
Patent 12153783
User Interfaces and Methods for Generating a New Artifact Based on Existing Artifacts
2y 5m to grant Granted Nov 26, 2024
Patent 12120422
SYSTEMS AND METHODS FOR CAPTURING AND DISPLAYING MEDIA DURING AN EVENT
2y 5m to grant Granted Oct 15, 2024
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 29%
With Interview: 48% (+19.0%)
Median Time to Grant: 3y 2m
PTA Risk: High
Based on 108 resolved cases by this examiner. Grant probability derived from career allow rate.
