Prosecution Insights
Last updated: April 19, 2026
Application No. 18/457,358

INFORMATION PROCESSING METHOD, INFORMATION PROCESSING APPARATUS, AND PROGRAM

Final Rejection — §101, §103, §112
Filed: Aug 29, 2023
Examiner: LEE, JENNIFER V
Art Unit: 3688
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Fujifilm Corporation
OA Round: 2 (Final)

Grant Probability: 25% (At Risk)
OA Rounds: 3-4
To Grant: 4y 3m
With Interview: 67%

Examiner Intelligence

Career Allow Rate: 25% (59 granted / 232 resolved; -26.6% vs TC avg)
Interview Lift: +41.5% higher allowance rate for resolved cases with an interview (a strong lift)
Avg Prosecution: 4y 3m typical timeline (28 applications currently pending)
Total Applications: 260 career applications across all art units
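To make the arithmetic behind these cards explicit, here is a minimal Python sketch. The with/without-interview case counts are not shown on the card, so the without-interview rate below is back-calculated from the reported lift; all variable names are ours:

```python
# Career allow rate from the counts shown above.
granted, resolved = 59, 232
allow_rate = granted / resolved                 # 0.254... -> the "25%" card

# Interview lift: percentage-point gap in allowance rate between resolved
# cases with an interview and those without. Only the lift and the
# with-interview figure are reported, so the without-interview rate is implied.
rate_with = 0.67                                # "With Interview" card
rate_without = rate_with - 0.415                # implied by the +41.5% lift

print(f"career allow rate: {allow_rate:.1%}")                 # 25.4%
print(f"interview lift: {rate_with - rate_without:+.1%}")     # +41.5%
```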

Statute-Specific Performance

§101: 30.1% (-9.9% vs TC avg)
§103: 32.6% (-7.4% vs TC avg)
§102: 13.5% (-26.5% vs TC avg)
§112: 21.8% (-18.2% vs TC avg)

Tech Center average is an estimate. Based on career data from 232 resolved cases.
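Each delta implies the Tech Center average the card compares against. A quick sketch recovering it (labels are ours; the card does not state exactly which rate each percentage measures):

```python
# Examiner's per-statute figure and its reported delta vs. the TC average.
per_statute = {
    "§101": (30.1, -9.9),
    "§103": (32.6, -7.4),
    "§102": (13.5, -26.5),
    "§112": (21.8, -18.2),
}
for statute, (rate, delta) in per_statute.items():
    tc_avg = rate - delta  # delta = examiner - TC avg, so TC avg = rate - delta
    print(f"{statute}: examiner {rate:.1f}% vs implied TC avg {tc_avg:.1f}%")
# All four deltas imply the same ~40.0% Tech Center average estimate.
```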

Office Action

Grounds of rejection: §101, §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in reply to the communications filed on September 23, 2025. The Applicant's Amendment and Request for Reconsideration has been received and entered. Claims 1-12 are currently pending and have been examined. Claims 1-5 and 10-12 have been amended.

Response to Arguments

Applicant's amendments necessitated the new grounds of rejection. Regarding the rejection of claims 1-12 under 35 USC 101, Applicant's arguments have been fully considered but they are not persuasive for the reasons set forth infra. Applicant's remaining arguments have been fully considered but they are not persuasive. Particularly, Applicant's arguments are directed to the instantly amended claims, and are thus moot in view of the new grounds of rejection.

Claim Interpretation Note

Claim 4 recites the limitation "adopting division of the subgroup in a case where a result of the evaluation satisfies a standard." (emphasis added). However, the claims do not positively recite a result of the evaluation satisfying a standard. Accordingly, this limitation is merely conditional and not necessarily performed. Given the conditional nature of the claim language noted above, such language has been afforded appropriate patentable weight during examination.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-12 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor, or for pre-AIA, the applicant, regards as the invention.

Claims 1, 11, and 12 recite "dividing a dataset into a plurality of subgroups that are robust with respect to domain shift based on attribute data of the dataset, wherein the dataset includes a plurality of combinations of a plurality of users and a plurality of items, and each of the combinations represents a behavior of the user on the item, the attribute data of dataset includes at least one of attribute data of the user and attribute data of the item". The term "robust" is a relative term which renders the claim indefinite. The term "robust" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. Consequently, one of ordinary skill in the art cannot determine how to avoid infringement of these claims because the metes and bounds of these claims are unclear. For examination purposes, Examiner has interpreted this claim as merely "dividing a dataset into a plurality of subgroups." Claims 2-10 depend from claim 1 and thus inherit the deficiencies of claim 1.
Claim 5 recites "causing the one or more processors to execute dividing the dataset into the plurality of subgroups by using attribute data having a relatively large difference in probability distribution with respect to the second attribute data." (emphasis added). The term "relatively large" is a relative term which renders the claim indefinite. The term "relatively large" is not defined by the claim, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention. For examination purposes, Examiner has interpreted this claim as merely "causing the one or more processors to execute dividing the dataset into the plurality of subgroups by using attribute data."

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.

Step 1. When considering subject matter eligibility under 35 U.S.C. 101, it must be determined whether the claim is directed to one of the four statutory categories of invention, i.e., process, machine, manufacture, or composition of matter.

Step 2A – Prong One. If the claims fall within one of the statutory categories, it must then be determined whether the claims recite an abstract idea, law of nature, or natural phenomenon.

Step 2A – Prong Two. If the claims recite an abstract idea, law of nature, or natural phenomenon, it must then be determined whether the claims recite additional elements that integrate the judicial exception into a practical application. If the claims do not recite additional elements that integrate the judicial exception into a practical application, then the claims are directed to a judicial exception.

Step 2B. If the claims are directed to a judicial exception, it must be evaluated whether the claims recite additional elements that amount to an inventive concept (i.e., "significantly more") than the recited judicial exception.

In the instant case, claims 1-10 are directed to a process; claim 11 is directed to a machine; and claim 12 is directed to a manufacture. A claim "recites" an abstract idea if there are identifiable limitations that fall within at least one of the groupings of abstract ideas enumerated in MPEP 2106.
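As an editorial aid, the examiner's Step 1 / Step 2A / Step 2B walkthrough above tracks the MPEP 2106 flow, which can be summarized as a decision function. This is a schematic of the legal test only, not code from any party to the case:

```python
def eligible_under_101(statutory_category: bool,
                       recites_judicial_exception: bool,
                       practical_application: bool,
                       significantly_more: bool) -> bool:
    """Schematic of the MPEP 2106 subject-matter eligibility flow."""
    if not statutory_category:          # Step 1: process/machine/manufacture/composition?
        return False
    if not recites_judicial_exception:  # Step 2A, Prong One: abstract idea recited?
        return True
    if practical_application:           # Step 2A, Prong Two: integrated into practical application?
        return True
    return significantly_more           # Step 2B: inventive concept ("significantly more")?
```

Here the examiner answers Step 1 yes, Prong One yes (organizing human activity and mental processes), and Prong Two and Step 2B no, which yields ineligibility.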
In the instant case, claim 1, and similarly claims 11 and 12, recite the steps of: dividing a dataset into a plurality of subgroups that are robust based on attribute data of the dataset, wherein the dataset includes a plurality of combinations of a plurality of users and a plurality of items, and each of the combinations represents a behavior of the user on the item, the attribute data of dataset includes at least one of attribute data of the user and attribute data of the item; generating, for each of the subgroups, a local model for performing a prediction of the behavior of the user on the item, wherein the local model is trained by using a portion of the dataset corresponding to one of the plurality of subgroups; and combining the local model of each of the plurality of subgroups to generate a combined model, wherein the domain shift refers to a case where a facility training the local model and performing the generation of the combined model is different from another facility using the combined model, and data of the another facility is unknown during the training of the local model and the generation of the combined model. These claim limitations set forth certain methods of organizing human activity, particularly commercial interactions including advertising, marketing, and sales activities/behaviors. Additionally, these steps set forth mental processes, particularly concepts performed in the human mind, including, inter alia, the observation and evaluation of information.

Further, the limitations of the claims are not indicative of integration into a practical application. Taking the claim elements separately, the additional elements of performing the steps via causing the one or more processors to execute, training a model, and with respect to domain shift merely implement the abstract idea on a computer environment. Considered in combination, the steps of Applicant's method add nothing that is not already present when the steps are considered separately. The remaining claim limitations recited in dependent claims merely narrow the abstract idea and do not recite further additional elements. Thus, claims 1-12 are directed to an abstract idea.

Regarding the independent claims, the technical elements of performing the steps via causing the one or more processors to execute and with respect to domain shift merely implement the abstract idea on a computer environment. While the claims recite training a model, these limitations are recited at a high level of generality and thus do not amount to significantly more. Additionally, the dependent claims do not recite further technical elements. When considering the elements and combinations of elements, the claims as a whole do not amount to significantly more than the abstract idea itself. This is because the claims do not amount to an improvement to another technology or technical field; the claims do not amount to an improvement to the functioning of a computer itself; the claims do not move beyond a general link of the use of an abstract idea to a particular technological environment; the claims merely amount to the application of, or instructions to apply, the abstract idea on a computer; and the claims amount to nothing more than requiring a generic computer to perform generic computer functions that are well-understood, routine and conventional activities previously known to the industry. The analysis above applies to all statutory categories of invention. Accordingly, claims 1-12 are rejected as ineligible for patenting under 35 USC 101 based upon the same rationale.
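For context on what is being characterized as abstract: the three recited steps describe an ensemble-style pipeline. A minimal, hypothetical Python sketch of that pipeline follows; all names and the prediction-averaging step are our illustrative assumptions, not taken from the application:

```python
from collections import defaultdict

def train_combined_model(dataset, attribute_of, train_local_model):
    """Sketch of claim 1's three steps: divide, train locally, combine.

    dataset: iterable of (user, item, behavior) combinations.
    attribute_of: maps a combination to the attribute value used to split.
    train_local_model: trains a predictive model on one subgroup's rows.
    """
    # Step 1: divide the dataset into subgroups based on attribute data.
    subgroups = defaultdict(list)
    for combination in dataset:
        subgroups[attribute_of(combination)].append(combination)

    # Step 2: generate a local model per subgroup, each trained only on
    # the portion of the dataset belonging to that subgroup.
    local_models = [train_local_model(rows) for rows in subgroups.values()]

    # Step 3: combine the local models into a combined model. Averaging
    # predictions is one illustrative scheme; the claim does not specify one.
    def combined_model(user, item):
        predictions = [m(user, item) for m in local_models]
        return sum(predictions) / len(predictions)

    return combined_model
```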
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 2, and 4-12 are rejected under 35 U.S.C. 103 as being unpatentable over Li (CN 113761350 A) in view of Huang (CN 114399055 A).

As per claim 1, Li teaches [a]n information processing method executed by one or more processors, comprising: causing the one or more processors to execute: (Li: (one or more processors; a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the data recommendation method provided by embodiments of the present invention.))

dividing a dataset into a plurality of subgroups that are robust with respect to domain shift based on attribute data of the dataset, wherein the dataset includes a plurality of combinations of a plurality of users and a plurality of items, and each of the combinations represents a behavior of the user on the item, the attribute data of dataset includes at least one of attribute data of the user and attribute data of the item; (Li: (Data preprocessing: balancing the original data set by using SMOTE (Synthetic Minority Oversampling technique); w ← wt; b ← mixing data set Dn divided into data blocks (batch) with size B of BatchSize . . . Where F is the ratio of the number of clients participating in each round of federated learning training, B is the local batch number, BatchSize is the batch size, i.e., the number of samples selected per training, E is the number of local epochs, which represents the number of times training is performed using all samples in the training set, t is the index of the current round number, η is the learning rate, and a is the accuracy of the detection model, and can be calculated through a loss function.); (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server. The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of browsing, paying attention to and purchasing goods and the like of the users in an e-commerce institution. . . . Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. The second client is also a client for data recommendation, and has the same function as the first client . . .))

generating, for each of the subgroups, a local model for performing a prediction of the behavior of the user on the item, wherein the local model is trained by using a portion of the dataset corresponding to one of the plurality of subgroups; and (Li: (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server. The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of browsing, paying attention to and purchasing goods and the like of the users in an e-commerce institution.); (Step S103: and the first client updates the first local recommendation model by using the global shared data); (The first local recommendation model is a recommendation model local to the first client . . . recommending object data matched with the corresponding user behavior data to respective users by the clients.))

combining the local model of each of the plurality of subgroups to generate a combined model, (Li: (The global recommendation model refers to a unified recommendation model jointly trained by the clients (the first client and the second client) . . . and the first client updates the first local recommendation model by using the global shared data so as to train the global recommendation model, and recommends object data matched with the corresponding user behavior data to the user by using the global recommendation model after the training of the global recommendation model is completed.); Claim 5 (the method comprises the steps that a server receives sharable data sent by clients in a client set, wherein the sharable data are obtained by training respective local recommendation models by the clients through respective user behavior data sets; and the server calculates global shared data according to the sharable data, the global shared data is used for updating respective local recommendation models of the clients so as to train a global recommendation model, and the global recommendation model is used for recommending object data matched with the corresponding user behavior data to respective users by the clients.))

Li does not explicitly disclose the following known technique, which is taught by Huang: wherein the domain shift refers to a case where a facility training the local model and performing the generation of the combined model is different from another facility using the combined model, and data of the another facility is unknown during the training of the local model and the generation of the combined model. (Huang: (The distribution difference between the source domain data, as well as the distribution difference between the source domain and the unknown target domain data, causes the model to not have a good effect on the unknown target domain. How to train a model with good performance in the target domain on a multi-source domain is called the domain generalization problem. Specifically, learning a model that generalizes well to an unknown target domain on labeled data from multiple source domains is domain generalization. In order to improve the domain generalization performance, that is, the accuracy of the model's classification of the target domain data, the data of the multi-source domain is usually uploaded to the central server for training.); (Under the federated learning framework, the present invention learns a generalization model for an unknown target domain across domains based on distributed multi-source domain data, and the main steps are as follows: Step 1: The federated learning architecture includes distributed multiple clients and one server. The function of the client is to store the data of the source domain, train the local model on the data of the source domain, and exchange model parameters with the server. The function of the server is to aggregate and distribute the model parameters, and the aggregated global model can be used for the data of the unknown target domain. Among them, the data in the source domain in the client all carry labels, while the data in the target domain is unlabeled and does not participate in the training process. Step 2: Send the model parameters trained by the client to the server; the server calculates the average value to get the new model parameters and distributes the new model parameters to all clients, iterating training for multiple rounds until the accuracy of the client's model on the local data set has stabilized, after which the global model of the server is used for the target domain. Among them, the present invention proposes a training method for federated adversarial learning. At each client, the feature distributions of all source-domain data are indirectly aligned through local adversarial learning by aligning the feature distributions of the source-domain data to the same reference feature distribution. At this time, the server can learn an invariant feature extractor, which extracts data features that are also close to the reference feature distribution on the target domain, so as to reduce the difference between the source domain data and the target domain data. Therefore, the global model generalizes well from multiple source domains to the target domain. In addition, in order to better predict the labels of the target domain data, the present invention proposes to train the classifier locally based on the extracted features and real labels on the client side, and through multiple rounds of iterative training with the server, the global classifier will be used for the target domain.))

This known technique is applicable to the method of Li as they both share characteristics and capabilities; namely, they are directed to training models. One of ordinary skill in the art at the time of filing would have recognized that applying the known technique of Huang would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Huang to the teachings of Li would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such unknown-facility features into similar methods.
Further, applying the teaching that the domain shift refers to a case where a facility training the local model and performing the generation of the combined model is different from another facility using the combined model, and that data of the another facility is unknown during the training of the local model and the generation of the combined model, to the teachings of Li would have been recognized by those of ordinary skill in the art as resulting in an improved method that trains a deep learning model on an existing labeled dataset for an unlabeled dataset that does not participate in training. (Huang: Background technique).

As per claim 2, Li teaches wherein the plurality of combinations further includes combinations of the plurality of users, the plurality of items, and a plurality of contexts. (Li: (The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of browsing, paying attention to and purchasing goods and the like of the users in an e-commerce institution))

As per claim 4, Li teaches further comprising: causing the one or more processors to execute: (Li: (cause the one or more processors to implement the data recommendation method provided by embodiments of the present invention.))

performing an evaluation of robustness of the combined model, in which the plurality of local models are combined, with respect to the domain shift; and (Li: (Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. . . The global recommendation model refers to a unified recommendation model jointly trained by the clients (the first client and the second client). Step S103: and the first client updates the first local recommendation model by using the global shared data so as to train the global recommendation model, and recommends object data matched with the corresponding user behavior data to the user by using the global recommendation model after the training of the global recommendation model is completed. The first client updating the first local recommendation model with the global shared data may include: and the first client updates the parameters of the first local recommendation model and the loss function gradient of the first local recommendation model according to the parameters of the global recommendation model and the loss function gradient of the global recommendation model. The first client may update the parameters of the first local recommendation model by: the first client calculates the product of the learning rate of the first local recommendation model and the loss function gradient of the global recommendation model, and calculates the difference value between the current parameter of the first local recommendation model and the product to obtain the updated parameter of the first local recommendation model. The first client may update the gradient of the loss function of the first local recommendation model by: and the first client calculates the updated loss function gradient of the first local recommendation model according to the parameters of the global recommendation model and the updated parameters of the local first local recommendation model.); Claim 5)

adopting division of the subgroup in a case where a result of the evaluation satisfies a standard. (Li: (Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. . . The global recommendation model refers to a unified recommendation model jointly trained by the clients (the first client and the second client). Step S103: and the first client updates the first local recommendation model by using the global shared data so as to train the global recommendation model, and recommends object data matched with the corresponding user behavior data to the user by using the global recommendation model after the training of the global recommendation model is completed. The first client updating the first local recommendation model with the global shared data may include: and the first client updates the parameters of the first local recommendation model and the loss function gradient of the first local recommendation model according to the parameters of the global recommendation model and the loss function gradient of the global recommendation model. The first client may update the parameters of the first local recommendation model by: the first client calculates the product of the learning rate of the first local recommendation model and the loss function gradient of the global recommendation model, and calculates the difference value between the current parameter of the first local recommendation model and the product to obtain the updated parameter of the first local recommendation model. The first client may update the gradient of the loss function of the first local recommendation model by: and the first client calculates the updated loss function gradient of the first local recommendation model according to the parameters of the global recommendation model and the updated parameters of the local first local recommendation model.); Claim 5)

As per claim 5, Li teaches wherein the attribute data includes a first attribute data and a second attribute data, further comprising: causing the one or more processors to execute dividing the dataset into the plurality of subgroups by using the first attribute data having a relatively large difference in probability distribution with respect to the second attribute data. (Li: (Data preprocessing: balancing the original data set by using SMOTE (Synthetic Minority Oversampling technique); w ← wt; b ← mixing data set Dn divided into data blocks (batch) with size B of BatchSize . . . Where F is the ratio of the number of clients participating in each round of federated learning training, B is the local batch number, BatchSize is the batch size, i.e., the number of samples selected per training, E is the number of local epochs, which represents the number of times training is performed using all samples in the training set, t is the index of the current round number, η is the learning rate, and a is the accuracy of the detection model, and can be calculated through a loss function.); (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server. The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of browsing, paying attention to and purchasing goods and the like of the users in an e-commerce institution. . . . Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. The second client is also a client for data recommendation, and has the same function as the first client . . .))

As per claim 6, Li teaches further comprising: causing the one or more processors to execute dividing the dataset into the plurality of subgroups in which each user belongs to at least one or more of the subgroups. (Li: (Data preprocessing: balancing the original data set by using SMOTE (Synthetic Minority Oversampling technique); w ← wt; b ← mixing data set Dn divided into data blocks (batch) with size B of BatchSize . . . Where F is the ratio of the number of clients participating in each round of federated learning training, B is the local batch number, BatchSize is the batch size, i.e., the number of samples selected per training, E is the number of local epochs, which represents the number of times training is performed using all samples in the training set, t is the index of the current round number, η is the learning rate, and a is the accuracy of the detection model, and can be calculated through a loss function.); (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server. The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of browsing, paying attention to and purchasing goods and the like of the users in an e-commerce institution. . . . Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. The second client is also a client for data recommendation, and has the same function as the first client . . .))

As per claim 7, Li teaches further comprising: causing the one or more processors to execute dividing the dataset into the plurality of subgroups in which at least a certain number or more of the items belong to each user.
(Li: (The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of . . . purchasing goods and the like of the users in an e-commerce institution))

As per claim 8, Li teaches further comprising: causing the one or more processors to execute generating different types of local models for each of the subgroups. (Li: (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server . . . Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. The second client is also a client for data recommendation, and has the same function as the first client, and at least one of the second client and the first client is a client of a different type, taking three clients as an example, wherein the first client is a client of a bank platform, and the two second clients are a client of a social platform and a client of an e-commerce platform.); Claim 5 (the method comprises the steps that a server receives sharable data sent by clients in a client set, wherein the sharable data are obtained by training respective local recommendation models by the clients through respective user behavior data sets))

As per claim 9, Li teaches further comprising: causing the one or more processors to execute: (Li: (cause the one or more processors to implement the data recommendation method provided by embodiments of the present invention.))

generating a pre-trained model based on a wider range of data than the subgroup in the dataset; and (Li: (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server. Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. . . . The global shared data includes parameters of the global recommendation model and a gradient of a penalty function of the global recommendation model. The global recommendation model refers to a unified recommendation model jointly trained by the clients (the first client and the second client)))

generating the local model using the pre-trained model as an initial parameter. (Li: (Step S103: and the first client updates the first local recommendation model by using the global shared data. . . The first client updating the first local recommendation model with the global shared data may include: and the first client updates the parameters of the first local recommendation model and the loss function gradient of the first local recommendation model according to the parameters of the global recommendation model and the loss function gradient of the global recommendation model. . . . The first client may update the gradient of the loss function of the first local recommendation model by: and the first client calculates the updated loss function gradient of the first local recommendation model according to the parameters of the global recommendation model and the updated parameters of the local first local recommendation model.))

As per claim 10, Li teaches further comprising: causing the one or more processors to execute outputting an item list, which is suggested to the user, by using a model in which the plurality of local models are combined. (Li: (The global recommendation model refers to a unified recommendation model jointly trained by the clients (the first client and the second client) . . . and the first client updates the first local recommendation model by using the global shared data so as to train the global recommendation model, and recommends object data matched with the corresponding user behavior data to the user by using the global recommendation model after the training of the global recommendation model is completed.); Claim 5 (the method comprises the steps that a server receives sharable data sent by clients in a client set, wherein the sharable data are obtained by training respective local recommendation models by the clients through respective user behavior data sets; and the server calculates global shared data according to the sharable data, the global shared data is used for updating respective local recommendation models of the clients so as to train a global recommendation model, and the global recommendation model is used for recommending object data matched with the corresponding user behavior data to respective users by the clients.))

As per claims 11 and 12, these claims recite limitations substantially similar to claim 1 and are therefore rejected in the same manner as this claim, as set forth above.

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Li/Huang in view of Biswas (US PGPub 2022/0351021).

As per claim 3, Li/Huang teaches further comprising: causing the one or more processors to execute dividing the dataset . . . (Li: (Data preprocessing: balancing the original data set by using SMOTE (Synthetic Minority Oversampling technique); w ← wt; b ← mixing data set Dn divided into data blocks (batch) with size B of BatchSize . . . Where F is the ratio of the number of clients participating in each round of federated learning training, B is the local batch number, BatchSize is the batch size, i.e., the number of samples selected per training, E is the number of local epochs, which represents the number of times training is performed using all samples in the training set, t is the index of the current round number, η is the learning rate, and a is the accuracy of the detection model, and can be calculated through a loss function.); (Step S101: the first client side trains a local first local recommendation model by using a local user behavior data set, and sends the first sharable data obtained by training to the server. The user behavior data set local to the first client may be a collection of one of: the data of user consumption behaviors, the data of user social behaviors and the data of user operation behaviors on the object. The object may be a commodity . . . the operation behavior data of the user on the object may be behavior data of browsing, paying attention to and purchasing goods and the like of the users in an e-commerce institution. . . . Step S102: the first client downloads global shared data from the server, the global shared data is calculated by the server according to the first sharable data and a second sharable data set, and the second sharable data set comprises one or more second sharable data sent by the second client to the server. The second client is also a client for data recommendation, and has the same function as the first client . . .))

Li/Huang does not explicitly state the following known technique, which is taught by Biswas: dividing the dataset based on at least one of attribute data of the user, attribute data of the item, or attribute data of the context (Biswas: [0025] (Hybrid recommendation system 101 may determine (at 110) preferences of the particular user via a separate modeling of the collective set of user information (e.g., the collective set of user interactions with different items) with the collected individual user information. For instance, the collective individual user information may identify the particular user's interests, the integrated modeling may identify other users with similar interests as the particular user, and, from the integrated modeling, hybrid recommendation system 101 may determine (at 110) the preferences of the particular user based on the preferences that are identified for the users with similar interests. In some embodiments, hybrid recommendation system 101 may determine (at 110) the preferences of the particular user by performing a collaborative filtering of the user-item interactions from the collective set of user information using individual preferences extracted from the collected individual user information.); [0058]-[0061] (Process 600 may include modeling (at 608) the different combinations of the user, item, and contextual information. Modeling (at 608) the different combinations of the user, item, and contextual information may include determining the probability that the different contextual information combinations resulted in different outcomes. For instance, the modeling (at 608) may identify that a first set of users in a first region and with an age range between 30-55 have a 65% probability of purchasing a first item, whereas a second set of users in a second region and with an age range between 16-29 have a 80% probability of purchasing a second item.))

This known technique is applicable to the method of Li/Huang as they both share characteristics and capabilities; namely, they are directed to combining models for product recommendations. One of ordinary skill in the art at the time of filing would have recognized that applying the known technique of Biswas would have yielded predictable results and resulted in an improved method. It would have been recognized that applying the technique of Biswas to the teachings of Li/Huang would have yielded predictable results because the level of ordinary skill in the art demonstrated by the references applied shows the ability to incorporate such attribute data features into similar methods. Further, applying dividing the dataset based on at least one of attribute data of the user, attribute data of the item, or attribute data of the context to the dataset division of Li/Huang would have been recognized by those of ordinary skill in the art as resulting in an improved method that may increase the engagement, sales, and/or other interactions the users have with sites and/or platforms. (Biswas: Para [0056]).
Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:

Bhosle (US Pat No 11,416,909) – enables information sharing between multiple disparate electronic marketplaces provided by various merchants.

Linden (US PGPub 2008/0250026) – monitors and reports information regarding browsing activities of users across multiple web sites.

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENNIFER V LEE, whose telephone number is (571) 272-4778. The examiner can normally be reached Monday - Friday, 9 AM - 5 PM EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, JEFFREY A. SMITH, can be reached at (571) 272-6763. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENNIFER V LEE/
Examiner, Art Unit 3688

/Jeffrey A. Smith/
Supervisory Patent Examiner, Art Unit 3688
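A note on the Li/Huang combination underlying the §103 rejection: both references describe federated learning, where clients train on local data and a server averages parameters into a global model that is then used at a facility whose data never participated in training (the claimed "domain shift" scenario). Below is a minimal sketch of that aggregation step, under our own simplifying assumptions; it is illustrative only and is not the references' actual code:

```python
import numpy as np

def federated_average(client_weight_lists):
    """FedAvg-style aggregation: element-wise mean of client parameters."""
    return [np.mean(layer_stack, axis=0)
            for layer_stack in zip(*client_weight_lists)]

# Each client trains on its own (source-domain) data and uploads weights;
# the server's averaged model is then applied at an unseen target facility
# whose data was unknown during training.
clients = [
    [np.array([0.9, 1.1]), np.array([0.2])],   # facility A's local model
    [np.array([1.1, 0.9]), np.array([0.4])],   # facility B's local model
]
global_model = federated_average(clients)
print(global_model)  # [array([1., 1.]), array([0.3])]
```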

Prosecution Timeline

Aug 29, 2023
Application Filed
Jun 23, 2025
Non-Final Rejection — §101, §103, §112
Aug 14, 2025
Interview Requested
Aug 21, 2025
Examiner Interview Summary
Aug 21, 2025
Examiner Interview (Telephonic)
Sep 23, 2025
Response Filed
Jan 29, 2026
Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602716
SYSTEMS AND METHODS FOR GENERATING RECOMMENDATIONS BASED ON ONLINE HISTORY INFORMATION AND GEOSPATIAL DATA
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12548069
METHOD, SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM FOR AUGMENTED REALITY-BASED FACE AND CLOTHING EFFECT
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12541782
METHOD, SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM FOR PRODUCT OBJECT PUBLISHING AND CONCURRENT IMAGE RECOGNITION
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12461976
METHOD AND SYSTEM FOR CAPTURING DATA FROM REQUESTS TRANSMITTED ON WEBSITES
Granted Nov 04, 2025 (2y 5m to grant)
Patent 12462289
DEVICE, METHOD, AND COMPUTER-READABLE MEDIA FOR RECOMMENDATION NETWORKING BASED ON CONNECTIONS, CHARACTERISTICS, AND ASSETS USING MACHINE LEARNING
Granted Nov 04, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 25%
With Interview: 67% (+41.5%)
Median Time to Grant: 4y 3m
PTA Risk: Moderate
Based on 232 resolved cases by this examiner. Grant probability derived from career allow rate.
