DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This Office action is in response to the most recent amendments and arguments filed by Applicant on 10/08/24. Applicant filed a preliminary amendment on the same day as the original claims. The amended claims are considered to be the most recent filings:
Claims 1 and 7 are amended
Claim 12 is cancelled
No claims are added
Claims 1-11 are pending
Note:
Claim 1 recites “receive the input data regarding a the predetermined consumption behavior”. The claim is ambiguous because it is unclear whether “a predetermined consumption behavior” or “the predetermined consumption behavior” is intended. It is assumed that the claim recites “the predetermined consumption behavior”, since it references the claim limitations above. Appropriate correction is required for clarification.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-11 of the current application are provisionally rejected on the ground of non-statutory double patenting as being unpatentable over Claims 1-10 and 12 of co-pending Application No. 18/864,085 (reference application).
Although the claims at issue are not identical, they are not patentably distinct from each other because Independent Claims 1 and 11 of the instant application recite substantially similar limitations to, and are obvious variants of, the claims of co-pending Application No. 18/864,085 (reference application). The claims of the instant application are narrower and would read on the broader version of the claims in the co-pending application. Specifically, the limitation “wherein the federated learning model is generated such that at least a part of local learning models generated for a plurality of different business operators are federated by learning a relationship between a plurality of customer groups respectively generated from business customer data owned by the business operators and consumption behaviors corresponding to the business operators…” recited in Independent Claims 1 and 11 of the current application overlaps with the claims of the co-pending application. The above-cited claims are only slightly different from what is recited in Independent Claims 1 and 12 of the co-pending application: “A federated learning model generation apparatus comprising: local learning model acquisition means for acquiring a plurality of different local learning models that have learned a relationship between a plurality of customer groups respectively generated from business customer data owned by a plurality of business operators and consumption behaviors corresponding to the business operators; and federated learning model generation means for receiving a predetermined consumption behavior of a customer as input data by federating at least a part of the acquired local learning models, and generating a federated learning model that outputs prospective customer data for the input data.”
This is a provisional non-statutory double patenting rejection because the patentably indistinct claims have not in fact been patented. The claims of the instant application are broader and would read on the narrower version of the claims in co-pending Application No. 18/854,990 (reference application) US PG Pub (US 2025/0245688 A1).
Examiner notes that elimination of an element or its functions is deemed to be obvious in light of prior art teachings of at least the recited element or its functions (see In re Karlson, 311 F.2d 581, 136 USPQ 184, 186 (CCPA 1963)), thereby rendering obvious the elimination of any elements recited in the claims of the reference application that are not recited in the instant claims.
Accordingly, one of ordinary skill in the art would have recognized the slight differences between the claim language/limitations of the corresponding claims as being directed to intended use, slight variations in terminology, or obvious variants of similar claim elements; therefore, these claims are not patentably distinct from one another despite these slight differences.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to a judicial exception (i.e., an abstract idea) without significantly more.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, claims 1-9 are directed to an apparatus, which is a statutory category.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, claim 10 is directed to a system, which is a statutory category.
Step One - First, pursuant to Step 1 of the January 2019 Guidance, 84 Fed. Reg. 53, claim 11 is directed to a method, which is a statutory category.
Step 2A Prong 1: Identify the Abstract Idea(s)
Under the Alice framework, Step 2A Prong One (part 1 of the Mayo test), the claims are analyzed to determine whether they are directed to a judicial exception. MPEP 2106.04(a). In determining whether the claims are directed to a judicial exception, the claims are evaluated to determine whether they recite a judicial exception (Prong One of Step 2A) and whether they recite additional elements that integrate the judicial exception into a practical application (Prong Two of Step 2A). See 2019 Revised Patent Subject Matter Eligibility Guidance (“2019 PEG”), 84 Fed. Reg. 50-57 (Jan. 7, 2019).
Under the 2019 PEG, Step 2A sets forth a two-prong inquiry: a claim is not “directed to” a judicial exception unless it recites a judicial exception (Prong One) and fails to integrate that exception into a practical application (Prong Two). Further, the particular groupings of abstract ideas are consistent with judicial precedent and are based on an extraction and synthesis of the key concepts identified by the courts as being abstract.
Independent claim 1, with respect to Step 2A, Prong One, when “taken as a whole” as drafted and given its broadest reasonable interpretation, falls within the abstract idea grouping of “certain methods of organizing human activity” (business relations; relationships or interactions between people). For instance, independent Apparatus Claim 1 is directed to an abstract idea, as evidenced by the claim limitations “receiving input data regarding a predetermined consumption behavior; generated for a plurality of different business operators are federated by learning a relationship between a plurality of customer groups respectively generated from business customer data owned by the business operators and consumption behaviors corresponding to the business operators, and set to be capable of outputting prospective customer data for the input data; and outputting the prospective customer data.”
Claim 10, with respect to Step 2A, Prong One, when “taken as a whole” as drafted and given its broadest reasonable interpretation, falls within the abstract idea grouping of “certain methods of organizing human activity” (business relations; relationships or interactions between people). For instance, System Claim 10 is directed to an abstract idea, as evidenced by the claim limitations “generate the local learning model for each of the business operators on a basis of customer data managed by each of the business operators; control, in a case where the local learning model is updated, acquisition of the updated local learning model; and update the federated learning model by using the acquired local learning model.”
Independent claim 11, with respect to Step 2A, Prong One, when “taken as a whole” as drafted and given its broadest reasonable interpretation, falls within the abstract idea grouping of “certain methods of organizing human activity” (business relations; relationships or interactions between people). For instance, independent Method Claim 11 is directed to an abstract idea, as evidenced by the claim limitations “receiving input data regarding a predetermined consumption behavior; supplying the input data to a plurality of different local learning models each having learned a relationship between business customer data and the consumption behavior on a basis of the business customer data owned by a predetermined business operator; receiving prospective customer data for the consumption behavior as an output for the input data; and outputting the received prospective customer data.”
Applicants’ originally submitted specification recites in [0008]: If there is an abundance of excellent data that serves as a basis for conducting marketing regarding behavior of a customer to purchase a product or behavior of using a service (that is, a predetermined consumption behavior), business operators can easily utilize this data for marketing. In this regard, a method can be considered which uses each other's customer data among a plurality of business operators. However, personal information of customers cannot be shared among the plurality of business operators. [0009] In view of the problems described above, an object of the present disclosure is to provide a technique that easily and suitably contributes to marketing activities.
In light of the specification, these claim limitations belong to the grouping of “certain methods of organizing human activity” because the claims relate to managing marketing activities for one or more human entities, which involves organizing human activity as described by the courts. The courts have used the phrase “certain methods of organizing human activity” to include fundamental economic principles or practices (including hedging, insurance, mitigating risk); commercial or legal interactions (including agreements in the form of contracts; legal obligations; advertising, marketing or sales activities or behaviors; business relations); and managing personal behavior or relationships or interactions between people (including social activities, teaching, and following rules or instructions).
Claim 10 recites substantially similar limitations to independent claim 1 and is rejected under Step 2A for similar reasons to claim 1 above.
Step 2A Prong 2: Additional Elements That Integrate the Judicial Exception into a Practical Application
With respect to the Step 2A, Prong Two - This judicial exception is not integrated into a practical application. In particular, the claim recites additional elements:
Independent claim 1: “A data processing apparatus comprising: a federated learning model generated such that at least a part of local learning models, input means for, output means for”
Claim 10: “A data processing system comprising: a local data processing apparatus configured to, an update control unit configured to, the data processing apparatus configured to”
Independent claim 11: “A data processing method of causing a computer to perform: a federated learning model generated by federating, from the federated learning model” and “A non-transitory computer-readable medium having stored thereon a program that causes a computer to perform a data processing method comprising:”
at a high level of generality such that they amount to no more than: adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). Accordingly, these additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limits on practicing the abstract idea. The claims are directed to an abstract idea with no elements amounting to significantly more.
Thus, the additional elements do not integrate the abstract idea into a practical application because they do not impose any meaningful limitations on practicing the abstract idea. As a result, claims 1 and 11 do not provide any specifics regarding integration into a practical application when recited in a claim with a judicial exception. See MPEP 2106.05(f).
The claims also recite the additional element of a “machine learning model.” This language merely requires execution of an algorithm that can be performed by a generic computer component and provides no detail regarding the operation of that algorithm. As such, the claim requirement amounts to mere instructions to implement the abstract idea on a computer and, therefore, is not sufficient to make the claim patent eligible. See Alice, 573 U.S. at 226 (determining that the claim limitations “data processing system,” “communications controller,” and “data storage unit” were generic computer components that amounted to mere instructions to implement the abstract idea on a computer); October 2019 Guidance Update at 11–12 (recitation of generic computer limitations for implementing the abstract idea “would not be sufficient to demonstrate integration of a judicial exception into a practical application”). Such a generic recitation of a “machine learning model” is insufficient to show a practical application of the recited abstract idea. None of these additional elements amounts to significantly more because they are, again, merely the software and/or hardware components used to implement the abstract idea on a general-purpose computer.
Similarly, dependent claims 2-9 are also directed to an abstract idea under Step 2A, first and second prong. In the present application, all of the dependent claims have been evaluated and it was found that they all inherit the deficiencies set forth with respect to the independent claims. For instance, dependent claim 2 recites “wherein the federated learning model outputs the prospective customer data including at least a part of the plurality of customer groups”. Dependent claim 5 recites “wherein, when receiving a consumption behavior of a business operator as an input, the federated learning model is generated by federating the local learning models that output the customer groups corresponding to the consumption behavior”. In this claim, “federated learning model” is an additional element, but it is still recited such that it amounts to no more than: adding the words “apply it” (or an equivalent) to the judicial exception, mere instructions to implement an abstract idea on a computer, or merely using a computer as a tool to perform an abstract idea, as discussed in MPEP 2106.05(f). As a result, Examiner asserts that the dependent claims, such as claims 2-9, are also directed to the abstract idea identified above.
Step 2B: Determine Whether Any Element, Or Combination, Amount to “Significantly More” Than the Abstract Idea Itself
With respect to Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. First, the invention lacks improvements to another technology or technical field [see Alice at 2351; 2019 IEG at 55], lacks meaningful limitations beyond generally linking the use of an abstract idea to a particular technological environment [Alice at 2360; 2019 IEG at 55], and fails to effect a transformation or reduction of a particular article to a different state or thing [2019 IEG at 55]. For the reasons articulated above, the claims recite an abstract idea that is limited to a particular field of endeavor (MPEP § 2106.05(h)) and recite insignificant extra-solution activity (MPEP § 2106.05(g)). By the factors and rationale provided above with respect to these MPEP sections, the additional elements of the claims that fail to integrate the abstract idea into a practical application also fail to amount to “significantly more” than the abstract idea.
As discussed above with respect to integration of the abstract idea into a practical application, the additional element(s) of:
Independent claim 1: “A data processing apparatus comprising: a federated learning model generated such that at least a part of local learning models, input means for, output means for”
Claim 10: “A data processing system comprising: a local data processing apparatus configured to, an update control unit configured to, the data processing apparatus configured to”
Independent claim 11: “A data processing method of causing a computer to perform: a federated learning model generated by federating, from the federated learning model” and “A non-transitory computer-readable medium having stored thereon a program that causes a computer to perform a data processing method comprising:”
are insufficient to amount to significantly more. Applicant's originally submitted specification describes the computer components above at least in paragraphs [0018] and [0025]-[0033]. In light of the specification, the components discussed above do not meaningfully limit the abstract idea because they merely link the use of the abstract idea to a particular technological environment (i.e., "implementation via computers"). In light of the specification, the claim limitations discussed above are merely instructions to implement the abstract idea on a computer. See MPEP 2106.05(f) (Mere Instructions to Apply an Exception: “Thus, for example, claims that amount to nothing more than an instruction to apply the abstract idea using a generic computer do not render an abstract idea eligible.” Alice Corp., 134 S. Ct. at 235). Mere instructions to apply an exception using a computer component cannot provide an inventive concept. The additional elements amount to no more than a recitation of generic computer elements utilized to perform generic computer functions, such as performing repetitive calculations, Bancorp Services v. Sun Life, 687 F.3d 1266, 1278, 103 USPQ2d 1425, 1433 (Fed. Cir. 2012) ("The computer required by some of Bancorp’s claims is employed only for its most basic function, the performance of repetitive calculations, and as such does not impose meaningful limits on the scope of those claims."); and storing and retrieving information in memory, Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; see MPEP 2106.05(d)(II).
Therefore, the claims at issue do not require any nonconventional computer, network, or display components, or even a “non-conventional and non-generic arrangement of known, conventional pieces,” but merely call for performance of the claimed functions on a set of generic computer components and display devices. None of these additional elements is significantly more because they are, again, merely the software and/or hardware components used to implement the abstract idea on a general-purpose computer. Generically recited computer elements do not add a meaningful limitation to the abstract idea because the Alice decision noted that generic structures that merely apply abstract ideas are not significantly more than the abstract ideas.
The computing elements are recited at a high level of generality (e.g., a generic device performing the generic computer function of processing data). Thus, this step is no more than mere instructions to apply the exception on a generic computer. In addition, using a processor to process data has been well-understood, routine, and conventional activity in the industry for many years. Generic computer features, such as a system or storage, do not amount to significantly more than the abstract idea. These limitations merely describe implementation of the invention using elements of a general-purpose system, which is not sufficient to amount to significantly more. See, e.g., Alice Corp., 134 S. Ct. 2347, 110 USPQ2d 1976; Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015).
Applicant's originally submitted specification describes the computer components above at least in [0018] and [0025]-[0033]. In light of the specification, the computer components identified above are well-understood, routine, conventional activities previously known to the industry (see MPEP 2106.05(d)).
The claim fails to recite any improvements to another technology or technical field, improvements to the functioning of the computer itself, use of a particular machine, effecting a transformation or reduction of a particular article to a different state or thing, adding unconventional steps that confine the claim to a particular useful application, and/or meaningful limitations beyond generally linking the use of an abstract idea to a particular environment. See 84 Fed. Reg. 55. Viewed individually or as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent eligible application of the abstract idea such that the claim(s) amounts to significantly more than the abstract idea itself.
Independent claim 11 recites substantially similar limitations to independent claim 1 and is rejected under Step 2B for similar reasons to claim 1 above.
Further, the additional elements of the claimed invention, when considered individually or as an ordered combination along with the other limitations discussed above in method claim 11, also do not meaningfully limit the abstract idea because they merely link the use of the abstract idea to a particular technological environment (i.e., "implementation via computers"). In light of the specification, the claim limitations discussed above are merely instructions to implement the abstract idea on a computer. See MPEP 2106.
Similarly, dependent claims 2-9 also do not include limitations amounting to significantly more than the abstract idea under Prong Two of Step 2A or under Step 2B of the Alice framework. In the present application, all of the dependent claims have been evaluated and it was found that they all inherit the deficiencies set forth with respect to the independent claims. Further, the dependent claims do not include limitations that overcome the stated assertions. Here, the dependent claims recite features/limitations that include the computer components identified above in the Step 2B analysis of independent claims 1 and 11. As a result, Examiner asserts that the dependent claims, such as claims 2-9, are also directed to the abstract idea identified above.
Further, Examiner notes that the additional limitations, when considered as an ordered combination, add nothing that is not already present when looking at the additional elements individually.
For more information on 101 rejections, see MPEP 2106 and the January 2019 Guidance at https://www.govinfo.gov/content/pkg/FR-2019-01-07/pdf/2018-28282.pdf.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 12417411 B2) in view of Collet et al. (US 2022/0156786 A1).
As per claims 1, 10-11: Regarding the claim limitations below, Reference Sharma in view of Collet shows:
A data processing apparatus, system, method comprising (Sharma, col. 1, lines 50-64: Another embodiment of the invention or elements thereof can be implemented in the form of a computer program product tangibly embodying computer readable instructions which, when implemented, cause a computer to carry out a plurality of method steps, as described herein. Furthermore, another embodiment of the invention or elements thereof can be implemented in the form of a system including a memory and at least one processor that is coupled to the memory and configured to perform noted method steps. Yet further, another embodiment of the invention or elements thereof can be implemented in the form of means for carrying out the method steps described herein, or elements thereof; the means can include hardware module(s) or a combination of hardware and software modules, wherein the software modules are stored in a tangible computer-readable storage medium (or multiple such media)).
Regarding the claim limitations below, Reference Sharma in view of Collet shows:
at least one memory storing instructions and a federated learning model, wherein the federated learning model is generated such that at least a part of local learning models generated for a plurality of different business operators are federated by learning a relationship between a plurality of customer groups respectively generated from business customer data owned by the business operators and consumption behaviors corresponding to the business operators, and is set to be capable of outputting prospective customer data for input data regarding a predetermined consumption behavior
Regarding the claim above, Sharma shows “at least one memory storing instructions and a federated learning model, wherein the federated learning model is generated such that at least a part of local learning models generated for a plurality of different business operators are federated by learning…, and is set to be capable of outputting prospective customer data for input data regarding a predetermined consumption behavior” in col. 7, lines 33-47: Step 304 includes detecting one or more federated learning environment-level outliers from at least a portion of the multiple client systems by processing at least a portion of the obtained local outlier-related data using one or more artificial intelligence models. In at least one embodiment, processing at least a portion of the obtained local outlier-related data using one or more artificial intelligence models includes processing at least a portion of the obtained local outlier-related data using one or more neural networks. In such an embodiment, using the one or more neural networks includes training a number of multiple outlier parameters equal to a sum of a total number of data points to client systems and a total number of client systems. Further, in at least one embodiment, using the one or more artificial intelligence models can include implementing one or more multilayer perceptrons.
However, Sharma does not explicitly show “a relationship between a plurality of customer groups respectively generated from business customer data owned by the business operators and consumption behaviors corresponding to the business operators”. Reference Collet shows the above limitation at least in Collet: [0035], [0041], [0064]-[0066], [0106]: Collet discusses that this learning from data for other campaigns and/or from the current campaign can be very powerful. For example, an entity may have a hundred other campaigns already executed from which much can be learned. See also Collet: [0007]: “The audience including at least a particular customer or a potential customer; based at least on behavior data, training a model to learn a personalized frequency for sending the electronic communications to the particular customer or the potential customer.” See also Collet at [0035]: The “state” of each customer, given by the entirety of a customer's historical behavioral data (behaviors 308 in the example in FIG. 3, which can also be “observe states” 410 in FIG. 4). See also Collet at [0041]: Customers (or groups of customers) are modeled, at least in part, by assigning them a state (“S”). This state may be in general determined by a customer's historical behavioral data. In exemplary operation (see especially FIG. 4), after taking an action (“A”), a reward (e.g., click/no click/unsubscribe) is observed, and, because the customer has had time to interact with the electronic communication or website, it is consequently found that the customer is in a new state S′. See also Collet at [0106]: Demographic and profile information for the particular customer may be used in the determination of the value function and resultant best send time. These additional online activities may be used to group people together and find a common value function for people that show similar behavioral patterns (e.g., browsing the same website at the same time during the day).
See also Collet: [0066], [0072], Fig. 7. Collet teaches that data is combined (both on a customer and a campaign level) across different entities (also referred to herein as partners). Using the combined data, the model may still use campaign level data to predict how well campaigns will do and combine it with customer data to select the best frequency for each customer/potential customer. Moreover, by using data from different partners, the model can transfer some of the learnings between different partners. For example, if a particular partner has never sent a Black Friday electronic communication before, but much data has been collected on other partners' Black Friday electronic communications and there is knowledge that this type of campaign tends to perform well, this insight can be used in some embodiments to predict that the given partner's campaign will also do well. See also Collet at [0072]: “Behavior data can include, in addition to data regarding past campaigns, new data concerning actions or inaction of the current customer of the campaign, e.g., opening the electronic communication, past clicks on the electronic communication, and associated purchases made. In addition, the behavior data could also include what the customer is clicking on within a website, purchase actions, placing a product or service in a cart online, browsing from a general webpage having a number of products to a webpage for a particular product, putting an item in the cart but not purchasing, unsubscribing or otherwise blocking future electronic communications, and other available data”.
Reference Sharma and Reference Collet are analogous prior art to the claimed invention because the references generally relate to the field of learning customer behavior using federated learning models (Sharma: col. 7, lines 33-47, see above; Collet: [0049], teaching that the models 306 and 404 may include, but are not limited to, utilizing techniques such as least squares policy iteration, random forests, Q-learning, Bayesian models, support vector machines (SVM), federated learning, or neural networks). Said references were filed before the effective filing date of the instant application; hence, said references qualify as analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Collet, particularly the ability of the federated model to incorporate analyzing the relationship between customer group data and customer consumption behavior ([0035], [0041], [0066]-[0072]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers in environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide for a system that dynamically and automatically determines a frequency for electronic communications for a campaign that is personalized for each individual customer through the configuration of the campaign, as taught by Reference Collet (see at least [0005]). In this manner, the process in Reference Sharma of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models), and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Collet, the results of the combination were predictable (MPEP 2143 A).
Regarding the claim limitations below, Reference Sharma in view of Collet shows:
at least one processor configured to execute the instructions to receive the input data regarding the predetermined consumption behavior (Sharma, col. 18, lines 46-67: Another embodiment of the invention or elements thereof can be implemented in the form of a computer program product tangibly embodying computer readable instructions which, when implemented, cause a computer to carry out a plurality of method steps, as described herein. Furthermore, another embodiment of the invention or elements thereof can be implemented in the form of a system including a memory and at least one processor that is coupled to the memory and configured to perform noted method steps. Yet further, another embodiment of the invention or elements thereof can be implemented in the form of means for carrying out the method steps described herein, or elements thereof; the means can include hardware module(s) or a combination of hardware and software modules, wherein the software modules are stored in a tangible computer-readable storage medium (or multiple such media).);
Regarding the claim limitations below, Reference Sharma in view of Collet shows:
receiving prospective customer data for the consumption behavior as an output for the input data from the federated learning model; and
Sharma does not explicitly show “receiving prospective customer data for the consumption behavior as an output for the input data from the federated learning model; and”. Reference Collet shows the above limitation at least in Collet: [0035], [0041], [0064]-[0066], [0106]. Collet discusses that this learning from data for other campaigns and/or from the current campaign can be very powerful, especially if, for example, an entity has a hundred other campaigns already executed from which much can be learned. See also Collet: [0007]: “The audience including at least a particular customer or a potential customer; based at least on behavior data, training a model to learn a personalized frequency for sending the electronic communications to the particular customer or the potential customer.” See also Collet at [0035]: The “state” of each customer is given by the entirety of a customer's historical behavioral data (behaviors 308 in the example in FIG. 3, which can also be “observe states” 410 in FIG. 4). See also Collet at [0041]: Customers (or groups of customers) are modeled, at least in part, by assigning them a state (“S”). This state may be in general determined by a customer's historical behavioral data. In exemplary operation (see especially FIG. 4), after taking an action (“A”), a reward (e.g., click/no click/unsubscribe) is observed, and, because the customer has had time to interact with the electronic communication or website, it is consequently found that the customer is in a new state S′. See also Collet at [0106]: Demographic and profile information for the particular customer may be used in the determination of the value function and resultant best send time. These additional online activities may be used to group people together and find a common value function for people that show similar behavioral patterns (e.g., browsing the same website at the same time during the day). See also Collet: [0066], [0072], Fig. 7.
Collet teaches that data is combined (both on a customer and a campaign level) across different entities (also referred to herein as partners). Using the combined data, the model may still use campaign level data to predict how well campaigns will do and combine it with customer data to select the best frequency for each customer/potential customer. Moreover, by using data from different partners, the model can transfer some of the learnings between different partners. For example, if a particular partner has never sent a Black Friday electronic communication before, but much data has been collected on other partners' Black Friday electronic communications and there is knowledge that this type of campaign tends to perform well, this insight can be used in some embodiments to predict that the given partner's campaign will also do well. See also Collet at [0072]: “Behavior data can include, in addition to data regarding past campaigns, new data concerning actions or inaction of the current customer of the campaign, e.g., opening the electronic communication, past clicks on the electronic communication, and associated purchases made. In addition, the behavior data could also include what the customer is clicking on within a website, purchase actions, placing a product or service in a cart online, browsing from a general webpage having a number of products to a webpage for a particular product, putting an item in the cart but not purchasing, unsubscribing or otherwise blocking future electronic communications, and other available data”.
Reference Sharma and Reference Collet are analogous prior art to the claimed invention because the references generally relate to the field of learning customer behavior using federated learning models (Sharma: col. 7, lines 33-47, see above; Collet: [0049], teaching that the models 306 and 404 may include, but are not limited to, utilizing techniques such as least squares policy iteration, random forests, Q-learning, Bayesian models, support vector machines (SVM), federated learning, or neural networks). Said references were filed before the effective filing date of the instant application; hence, said references qualify as analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Collet, particularly the ability of the federated model to incorporate analyzing the relationship between customer group data and customer consumption behavior ([0035], [0041], [0066]-[0072]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers in environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide for a system that dynamically and automatically determines a frequency for electronic communications for a campaign that is personalized for each individual customer through the configuration of the campaign, as taught by Reference Collet (see at least [0005]). In this manner, the process in Reference Sharma of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models), and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Collet, the results of the combination were predictable (MPEP 2143 A).
Regarding the claim limitations below, Reference Sharma in view of Collet shows:
a local data processing apparatus configured to generate the local learning model for each of the business operators on a basis of customer data managed by each of the business operators (Sharma: Col. 7, lines 25-46: FIG. 3 is a flow diagram illustrating techniques according to an embodiment of the present invention. Step 302 includes obtaining local outlier-related data from multiple client systems within a federated learning environment. In one or more embodiments, the local outlier-related data include data pertaining to solutions to one or more local loss functions determined by the multiple client systems. Step 304 includes detecting one or more federated learning environment-level outliers from at least a portion of the multiple client systems by processing at least a portion of the obtained local outlier-related data using one or more artificial intelligence models. In at least one embodiment, processing at least a portion of the obtained local outlier-related data using one or more artificial intelligence models includes processing at least a portion of the obtained local outlier-related data using one or more neural networks. In such an embodiment, using the one or more neural networks includes training a number of multiple outlier parameters equal to a sum of a total number of data points to client systems and a total number of client systems. Further, in at least one embodiment, using the one or more artificial intelligence models can include implementing one or more multilayer perceptrons);
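For illustration only (this sketch is not code from the cited references; all names are hypothetical), the local-model-generation step quoted above — each client solving its own local loss function on data it manages, with only the resulting solution leaving the client — can be pictured as follows:

```python
import numpy as np

# Illustrative sketch only: each business operator ("client") fits a local
# model on its privately managed customer data, and only the fitted
# parameters (the solution to its local loss) are shared with the federated
# server, analogous to step 302 of the quoted passage.
rng = np.random.default_rng(1)

def fit_local_model(X, y):
    """Solve the local least-squares loss ||Xw - y||^2 in closed form."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

true_w = np.array([2.0, -1.0])
local_solutions = []
for _ in range(3):                       # three business operators
    X = rng.normal(size=(50, 2))         # private customer data, never shared
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    local_solutions.append(fit_local_model(X, y))

# Only these parameter vectors are transmitted to the federated server.
print(np.mean(local_solutions, axis=0))
```

The design point illustrated is that the raw customer data never leaves the local data processing apparatus; the server sees only the local loss-function solutions.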
Regarding the claim limitations below, Reference Sharma in view of Collet shows:
an update control unit configured to control, in a case where the local learning model is updated, acquisition of the updated local learning model (Sharma: col. 3, lines 29-45: More specifically, as illustrated in FIG. 1, each node 102 generates and provides to server 105 (particularly, to model update collection 107) one or more outlier analysis updates derived from its respective local outlier analysis (carried out by module 103). At least a portion of the updates contained within collection 107 are then provided to global OA module 109, wherein global outlier analysis is carried out (as further detailed herein). Based at least in part on such global outlier analysis, global OA module 109 generates and outputs, to the local OA module 103 of each node 102, one or more dynamic feedback signals. In one or more embodiments, the feedback signal sent by the server may include the outlierness measures (e.g., at least one real number) assigned to clients based at least in part on the updates shared by the clients. Such a client, upon receiving the signal, recalibrates its local outlierness measure. As detailed herein, at least one embodiment includes incorporating a global outlier analysis. In such an embodiment, once the server aggregates and/or pools updates from the clients {θ′.sub.1, θ′.sub.2, . . . , θ′.sub.n}, the server performs a global outlier analysis and determines which updates are problematic (e.g., updates derived from outlier data points). Additionally, in connection with aggregating the updates, the server discounts the contribution of the problematic updates. Accordingly, in one or more embodiments, the aggregation strategy can be modeled and thereby solved as an optimization problem.); and
Regarding the claim limitations below, Reference Sharma in view of Collet shows:
output the prospective customer data on a basis of the federated learning model (Sharma: col. 3, lines 29-45: More specifically, as illustrated in FIG. 1, each node 102 generates and provides to server 105 (particularly, to model update collection 107) one or more outlier analysis updates derived from its respective local outlier analysis (carried out by module 103). At least a portion of the updates contained within collection 107 are then provided to global OA module 109, wherein global outlier analysis is carried out (as further detailed herein). Based at least in part on such global outlier analysis, global OA module 109 generates and outputs, to the local OA module 103 of each node 102, one or more dynamic feedback signals. In one or more embodiments, the feedback signal sent by the server may include the outlierness measures (e.g., at least one real number) assigned to clients based at least in part on the updates shared by the clients. Such a client, upon receiving the signal, recalibrates its local outlierness measure). Col. 3, lines 64-67 and col. 4, lines 1-5: At least a portion of such a solution is output to server 205, which uses such an output in conjunction with at least a portion of the validation data contained within database 216 to solve at least one global objective and apply at least one aggregate. By way of illustration, in one or more embodiments, a server aggregates the information received from all clients and applies the aggregated information to the global model.)
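For illustration only (this sketch is not code from Sharma; the function and variable names are hypothetical), the server-side behavior quoted above — aggregating client updates while discounting the contribution of updates deemed problematic by the global outlier analysis, and returning outlierness measures to the clients as a feedback signal — can be pictured as follows:

```python
import numpy as np

# Illustrative sketch only: the server aggregates client updates but
# down-weights updates flagged as problematic by the global outlier
# analysis, then the per-client weights can serve as feedback signals
# for local recalibration.
def aggregate_with_discount(updates, outlierness, temperature=1.0):
    """Weighted average in which higher outlierness -> smaller contribution."""
    updates = np.asarray(updates, dtype=float)
    weights = np.exp(-np.asarray(outlierness, dtype=float) / temperature)
    weights /= weights.sum()
    return weights @ updates, weights

updates = [np.array([1.0, 1.0]),
           np.array([1.1, 0.9]),
           np.array([9.0, -8.0])]   # an outlier-looking update
outlierness = [0.1, 0.2, 5.0]       # scores from the global outlier analysis

global_update, weights = aggregate_with_discount(updates, outlierness)
print(global_update)  # dominated by the two well-behaved clients
print(weights)        # feedback signal: per-client discount factors
```

As Sharma notes, such an aggregation strategy can be modeled and solved as an optimization problem; the softmax-style discount above is merely one simple stand-in.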
As per claim 2: Regarding the claim limitations below, Reference Sharma in view of Collet shows:
wherein the federated learning model outputs the prospective customer data including at least a part of the plurality of customer groups.
Claim interpretation: the above claim is ambiguous. It is unclear what “at least a part of the plurality of customer groups” means. Does this limitation refer to the consumption behavior of the customer groups, the number of customer groups, or the names or information of the customer groups? For the purposes of this office action, this claim term is understood as information about the plurality of customer groups.
Sharma does not explicitly show the above limitation. Collet shows the above limitation at least in Fig. 8 & [0074]-[0076]: Collet teaches that the computer system 800 in FIG. 8 includes one or more processor unit(s) 810 and main memory 820. See also Collet at [0049], [0069]-[0070], [0041]-[0048], [0062], [0098]: Collet teaches that the model may also be optimized for: offline data (e.g., in-store visits); product returns data; user demographic data (e.g., age, location, gender); client-specific user data (e.g., loyalty status, applied for client credit card); client business goals (e.g., sell-through goals, inventory constraints, margin goals); and/or product margin data. See also Collet at [0041]: Customers (or groups of customers) are modeled, at least in part, by assigning them a state (“S”). This state may be in general determined by a customer's historical behavioral data. See also Collet at [0098]: Customer-proper attributes: characteristics of the customer including but not limited to age, gender, location (e.g., zip code), and/or product or campaign affinities of the intended recipient. Behavior: including but not limited to online and offline behavior.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Collet, particularly the ability of the federated model to incorporate analyzing the relationship between customer group data and customer consumption behavior ([0035], [0041], [0066]-[0072]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers in environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide for a system that dynamically and automatically determines a frequency for electronic communications for a campaign that is personalized for each individual customer through the configuration of the campaign, as taught by Reference Collet (see at least [0005]). In this manner, the process in Reference Sharma of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models), and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Collet, the results of the combination were predictable (MPEP 2143 A).
As per claim 3: Regarding the claim limitations below, Reference Sharma in view of Collet shows:
wherein the federated learning model outputs the prospective customer data including an estimated purchase index indicating a tendency of the consumption behaviors of the customer groups.
Sharma does not explicitly show the above limitation. Collet shows the above limitation at least in Fig. 8 & [0074]-[0076]: Collet teaches that the computer system 800 in FIG. 8 includes one or more processor unit(s) 810 and main memory 820. See also Collet at [0049], [0069]-[0070], [0041]-[0048], [0062], [0098], [0106] & [0114]: Collet notes that for each customer, the methods and systems determine an estimate of the value of each time unit (probability to engage at each time unit for that person). This can provide a value function (defined over the time period) for each customer that would indicate how valuable sending a communication at a certain time would be for that person. This value function can also depend on a broader date context, for instance, which day of the week or which week/month of the year the date is in. The customer state may be defined by other attributes including but not limited to the value of the campaign(s) to which the customer is eligible; customer-proper attributes (such as age, gender, and product affinities to name just a few); customer online/offline activity, and optional attributes such as how late/early it is in the day, how many communications in each strategy are left, etc. Based on the values for the attributes, the value of sending/not sending is estimated and a decision is made based on the estimate. This decision effectively selects one of the strategies, which will then be followed (if possible, depending on eligibility, for instance) throughout the day. See also Collet at [0025] & [0098]: This target audience may be defined by various conditional statements about customer's past behavior or attributes, for example, past open/click/purchase behavior, whether the customer added items to their cart, their predicted lifetime value, affinity to a certain product, etc. Customer-proper attributes: characteristics of the customer including but not limited to age, gender, location (e.g., zip code), and/or product or campaign affinities of the intended recipient.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Collet, particularly the ability of the federated model to incorporate analyzing the relationship between customer group data and customer consumption behavior ([0035], [0041], [0066]-[0072]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers in environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide for a system that dynamically and automatically determines a frequency for electronic communications for a campaign that is personalized for each individual customer through the configuration of the campaign, as taught by Reference Collet (see at least [0005]). In this manner, the process in Reference Sharma of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models), and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Collet, the results of the combination were predictable (MPEP 2143 A).
As per claim 4: Regarding the claim limitations below, Reference Sharma in view of Collet shows:
wherein, when receiving a customer group as an input, the federated learning model is generated by federating the local learning models that output the consumption behaviors corresponding to the customer group.
Sharma does not explicitly show the above limitation. Collet shows the above limitation at least in Fig. 8 & [0074]-[0076]: Collet teaches that the computer system 800 in FIG. 8 includes one or more processor unit(s) 810 and main memory 820. See also Collet at [0049], [0069]-[0070], [0041]-[0048], [0062], [0098], [0106] & [0114]; and at [0035], [0041], [0106] & Fig. 3: Collet teaches that customers (or groups of customers) are modeled, at least in part, by assigning them a state (“S”). This state may be in general determined by a customer's historical behavioral data. In exemplary operation (see especially FIG. 4), after taking an action (“A”), a reward (e.g., click/no click/unsubscribe) is observed, and, because the customer has had time to interact with the electronic communication or website, it is consequently found that the customer is in a new state S′. Using this data, the model can find an optimal policy, which is a set of rules that map each state S to the best action A to take, in order to maximize the future reward. Demographic and profile information for the particular customer may be used in the determination of the value function and resultant best send time. These additional online activities may be used to group people together and find a common value function for people that show similar behavioral patterns (e.g., browsing the same website at the same time during the day). See also Collet at Fig. 3 noting 308 “behaviors” and Fig. 6 noting 610 “behaviors”.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Collet, particularly the ability of the federated model to incorporate analyzing the relationship between customer group data and customer consumption behavior ([0035], [0041], [0066]-[0072]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers in environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide for a system that dynamically and automatically determines a frequency for electronic communications for a campaign that is personalized for each individual customer through the configuration of the campaign, as taught by Reference Collet (see at least [0005]). In this manner, the process in Reference Sharma of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models), and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Collet, the results of the combination were predictable (MPEP 2143 A).
As per claim 5: Regarding the claim limitations below, Reference Sharma in view of Collet shows:
wherein, when receiving a consumption behavior of a business operator as an input, the federated learning model is generated by federating the local learning models that output the customer groups corresponding to the consumption behavior.
Sharma does not explicitly show the above limitation. Collet shows the above limitation at least in Fig. 8 & [0074]-[0076]: Collet teaches that the computer system 800 in FIG. 8 includes one or more processor unit(s) 810 and main memory 820. See also Collet at [0049], [0069]-[0070], [0041]-[0048], [0062], [0098], [0106] & [0114]; and at [0041]-[0043] & [0072]-[0073]: Collet teaches that the model can optimize for certain data such as purchase data related to actual purchases by the recipient of the electronic communications. The model can also optimize for click data, e.g., a lifetime value of clicks, which is defined as the total expected number of clicks for a customer during the entire time he/she remains subscribed to the electronic communication list. See also Collet at Fig. 7 & [0072]-[0073]: Behavior data can include, in addition to data regarding past campaigns, new data concerning actions or inaction of the current customer of the campaign, e.g., opening the electronic communication, past clicks on the electronic communication, and associated purchases made. In addition, the behavior data could also include what the customer is clicking on within a website, purchase actions, placing a product or service in a cart online, browsing from a general webpage having a number of products to a webpage for a particular product, putting an item in the cart but not purchasing, unsubscribing or otherwise blocking future electronic communications, and other available data.
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to provide the teachings of Reference Collet, particularly the ability of the federated model to incorporate analyzing the relationship between customer group data and customer consumption behavior ([0035], [0041], [0066]-[0072]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers in environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide for a system that dynamically and automatically determines a frequency for electronic communications for a campaign that is personalized for each individual customer through the configuration of the campaign, as taught by Reference Collet (see at least [0005]). In this manner, the process in Reference Sharma of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models), and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Collet, the results of the combination were predictable (MPEP 2143 A).
As per claim 6: Regarding the claim limitations below, Reference Sharma in view of Collet shows:
wherein the federated learning model is generated by federating at least a part of features extracted for each of the local learning models.
(Sharma: Col. 7, lines 25-46: FIG. 3 is a flow diagram illustrating techniques according to an embodiment of the present invention. Step 302 includes obtaining local outlier-related data from multiple client systems within a federated learning environment. In one or more embodiments, the local outlier-related data include data pertaining to solutions to one or more local loss functions determined by the multiple client systems. Step 304 includes detecting one or more federated learning environment-level outliers from at least a portion of the multiple client systems by processing at least a portion of the obtained local outlier-related data using one or more artificial intelligence models. In at least one embodiment, processing at least a portion of the obtained local outlier-related data using one or more artificial intelligence models includes processing at least a portion of the obtained local outlier-related data using one or more neural networks. In such an embodiment, using the one or more neural networks includes training a number of multiple outlier parameters equal to a sum of a total number of data points to client systems and a total number of client systems. Further, in at least one embodiment, using the one or more artificial intelligence models can include implementing one or more multilayer perceptrons).
Claims 7-9 are rejected under 35 U.S.C. 103 as being unpatentable over Sharma et al. (US 12417411 B2) in view of Collet et al. (US 2022/0156786 A1), and further in view of Fenton et al. (WO 2022/214699 A1).
As per claim 7: Regarding the claim limitations below, Reference Sharma in view of Collet and Fenton shows:
wherein the federated learning model is generated by federating the local learning models learned on a basis of the business customer data including a common region commonly possessed by the plurality of business operators and a unique region possessed by each of the plurality of business operators.
Sharma in view of Collet does not explicitly show “including a common region commonly possessed by the plurality of business operators and a unique region possessed by each of the plurality of business operators” in the above limitation. Fenton shows this limitation at least at [0017], [0026]-[0027], and [0064]-[0065]. Fenton teaches training one or more models, or executing one or more queries, optimized to recognize the behavior of the specified subjects within the generated common representation for the consumer data sets. Method 200 in FIG. 2 may include, at step 210, creating a common representation, such as by automated feature generation, and, at step 220, describing one or more target groups, such as by training one or more models or executing one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation of the consumer data. Fenton further teaches a sub-system for automated feature generation to create a common representation across one or more consumer and producer data sets; a describer sub-system that trains one or more models or executes one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation for the consumer data sets; a finder sub-system that highlights likely candidates for the specified subjects across the common representations of the one or more producer data sets; the performing of analytics for each producer data set; and the outputting of the analytics results.
See also Fenton at [0026]-[0027]: “Creating a common representation, such as by automated feature generation, at step 220 describing one or more target groups, such as by training one or more models or execute one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation of the consumer data, evaluate the finder at step 230 by executing the queries on the producer data set(s) to identify likely candidates for the specified input data subjects in each producer data set, perform analytics over the identified subjects for each producer data set at step 240, and output the analytics results at step 250”. See also Fenton at [0064]-[0065]: describer models are made available for use by any number of other data controllers to overlay insights upon their data and perform combined analytics via a two-sided marketplace; to produce and/or consume analytical insights from other data sets, self-service capabilities are offered to allow analysts at data controllers to create new common representations, describer models, and analytical outputs. See also Fenton at Tables 1-2.
Reference Sharma and Reference Fenton are analogous prior art to the claimed invention because the references generally relate to the field of learning customer behavior using federated learning models (Sharma: col. 7, lines 33-47, see above; Fenton: [0027], “federated”). Further, said references share the same classifications, i.e., G06Q30 and G06N20, and were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of this AIA application to provide the teachings of Reference Fenton, particularly the ability to use anonymous groups to analyze disparate data sets via either individual-to-segment or segment-to-segment matching using data modeling or querying ([0005]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers across environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide the ability to use anonymous groups to analyze disparate data sets via either individual-to-segment or segment-to-segment matching using data modeling or querying, as taught by Reference Fenton (see at least [0005]), so that Sharma’s process of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models); in the combination, each element merely performs the same function as it does separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Fenton, the results of the combination were predictable (MPEP 2143(I)(A)).
As per claim 8: Regarding the claim limitations below, Reference Sharma in view of Collet and Fenton shows:
wherein the federated learning model is generated by federating the local learning models learned on a basis of the business customer data including the customer group as the common region.
Sharma in view of Collet does not explicitly show “including the customer group as the common region” in the above limitation. Fenton shows this limitation at least at [0034]-[0035] and Fig. 2: Fenton teaches that the features may be based on demographics or other data-subject characteristics common to both data sets at step 219. Combinations of the respective forms may also be utilized in creating a detailed feature array in step 213; for example, one or more of geo-spatial features 214, temporal features 215, spending behaviors 216, product or brand affinities 218, and demographics 219 may be utilized. See also Fenton at [0020] and [0027] for “learning”, and at [0017], [0026]-[0027], and [0064]-[0065]. Fenton teaches training one or more models, or executing one or more queries, optimized to recognize the behavior of the specified subjects within the generated common representation for the consumer data sets. Method 200 in FIG. 2 may include, at step 210, creating a common representation, such as by automated feature generation, and, at step 220, describing one or more target groups, such as by training one or more models or executing one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation of the consumer data. Fenton further teaches a sub-system for automated feature generation to create a common representation across one or more consumer and producer data sets; a describer sub-system that trains one or more models or executes one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation for the consumer data sets; a finder sub-system that highlights likely candidates for the specified subjects across the common representations of the one or more producer data sets; the performing of analytics for each producer data set; and the outputting of the analytics results.
See also Fenton at [0026]-[0027]: “Creating a common representation, such as by automated feature generation, at step 220 describing one or more target groups, such as by training one or more models or execute one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation of the consumer data, evaluate the finder at step 230 by executing the queries on the producer data set(s) to identify likely candidates for the specified input data subjects in each producer data set, perform analytics over the identified subjects for each producer data set at step 240, and output the analytics results at step 250”. See also Fenton at [0064]-[0065]: describer models are made available for use by any number of other data controllers to overlay insights upon their data and perform combined analytics via a two-sided marketplace; to produce and/or consume analytical insights from other data sets, self-service capabilities are offered to allow analysts at data controllers to create new common representations, describer models, and analytical outputs. See also Fenton at Tables 1-2.
Reference Sharma and Reference Fenton are analogous prior art to the claimed invention because the references generally relate to the field of learning customer behavior using federated learning models (Sharma: col. 7, lines 33-47, see above; Fenton: [0027], “federated”). Further, said references share the same classifications, i.e., G06Q30 and G06N20, and were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of this AIA application to provide the teachings of Reference Fenton, particularly the ability to use anonymous groups to analyze disparate data sets via either individual-to-segment or segment-to-segment matching using data modeling or querying ([0005]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers across environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide the ability to use anonymous groups to analyze disparate data sets via either individual-to-segment or segment-to-segment matching using data modeling or querying, as taught by Reference Fenton (see at least [0005]), so that Sharma’s process of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models); in the combination, each element merely performs the same function as it does separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Fenton, the results of the combination were predictable (MPEP 2143(I)(A)).
As per claim 9: Regarding the claim limitations below, Reference Sharma in view of Collet and Fenton shows:
wherein the federated learning model is generated by federating the local learning models each having, as the common region, data in which customers are classified into a plurality of groups as the customer group on a basis of a predetermined attribute regarding each customer.
Sharma in view of Collet does not explicitly show “each having, as the common region, data in which customers are classified into a plurality of groups as the customer group on a basis of a predetermined attribute regarding each customer” in the above limitation. Fenton shows this limitation at least at [0043]-[0044], [0047]-[0048], and Figs. 2-6. Fenton teaches that categories may be defined for each attribute. For instance, if RFM is the selected common representation, recency may be broken into three categories: customers with purchases within the last 90 days; between 91 and 365 days; and longer than 365 days. Such categories may be derived from business rules, domain knowledge, industry standards, or using data-mining techniques to find meaningful breaks. Once each of the attributes has appropriate categories defined, features are created from the intersection of the values. If there were three categories for each attribute, then the resulting matrix may have twenty-seven possible combinations. Companies may also decide to collapse certain sub-features if the gradations appear too small to be useful. See also Fenton at [0047]-[0048]: when multiple input groups are involved, the finder may score each data subject for membership in each group using the common representation. The scoring may be performed, for example, by determining the correlation of the data subject’s attributes with the values/categories in each input group; the scoring may be performed in any number of ways. See also [0034]-[0035] and Fig. 2: Fenton teaches that the features may be based on demographics or other data-subject characteristics common to both data sets at step 219. Combinations of the respective forms may also be utilized in creating a detailed feature array in step 213; for example, one or more of geo-spatial features 214, temporal features 215, spending behaviors 216, product or brand affinities 218, and demographics 219 may be utilized. See also Fenton at [0020] and [0027] for “learning”.
See also [0017], [0026]-[0027], and [0064]-[0065]. Fenton teaches training one or more models, or executing one or more queries, optimized to recognize the behavior of the specified subjects within the generated common representation for the consumer data sets. Method 200 in FIG. 2 may include, at step 210, creating a common representation, such as by automated feature generation, and, at step 220, describing one or more target groups, such as by training one or more models or executing one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation of the consumer data. Fenton further teaches a sub-system for automated feature generation to create a common representation across one or more consumer and producer data sets; a describer sub-system that trains one or more models or executes one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation for the consumer data sets; a finder sub-system that highlights likely candidates for the specified subjects across the common representations of the one or more producer data sets; the performing of analytics for each producer data set; and the outputting of the analytics results. See also Fenton at [0026]-[0027]: “Creating a common representation, such as by automated feature generation, at step 220 describing one or more target groups, such as by training one or more models or execute one or more queries optimized to recognize the behavior of the specified subjects within the generated common representation of the consumer data, evaluate the finder at step 230 by executing the queries on the producer data set(s) to identify likely candidates for the specified input data subjects in each producer data set, perform analytics over the identified subjects for each producer data set at step 240, and output the analytics results at step 250”.
See also Fenton at [0064]-[0065]: describer models are made available for use by any number of other data controllers to overlay insights upon their data and perform combined analytics via a two-sided marketplace; to produce and/or consume analytical insights from other data sets, self-service capabilities are offered to allow analysts at data controllers to create new common representations, describer models, and analytical outputs. See also Fenton at Tables 1-2.
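For clarity, the category-intersection feature scheme Fenton describes for an RFM common representation (three categories per attribute, with features formed from the intersection of the category values, yielding twenty-seven combinations) can be sketched as follows. This is an illustrative sketch only, not Fenton's implementation: the bucket labels and day thresholds mirror Fenton's recency example, while all function names and the frequency/monetary buckets are hypothetical.

```python
# Hypothetical sketch of an RFM category-intersection feature scheme
# (illustrative only; not the implementation of WO 2022/214699 A1).
from itertools import product

def recency_bucket(days_since_purchase):
    # Fenton's recency example: within 90 days; 91-365 days; over 365 days.
    if days_since_purchase <= 90:
        return "R1"
    if days_since_purchase <= 365:
        return "R2"
    return "R3"

RECENCY = ["R1", "R2", "R3"]
FREQUENCY = ["F1", "F2", "F3"]   # assumed three frequency buckets
MONETARY = ["M1", "M2", "M3"]    # assumed three monetary buckets

# Features are created from the intersection of the category values:
# 3 x 3 x 3 = 27 possible combinations, matching Fenton's matrix.
features = ["-".join(combo) for combo in product(RECENCY, FREQUENCY, MONETARY)]
```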
Reference Sharma and Reference Fenton are analogous prior art to the claimed invention because the references generally relate to the field of learning customer behavior using federated learning models (Sharma: col. 7, lines 33-47, see above; Fenton: [0027], “federated”). Further, said references share the same classifications, i.e., G06Q30 and G06N20, and were filed before the effective filing date of the instant application; hence, said references are analogous prior-art references.
It would have been obvious to one of ordinary skill in the art before the effective filing date of this AIA application to provide the teachings of Reference Fenton, particularly the ability to use anonymous groups to analyze disparate data sets via either individual-to-segment or segment-to-segment matching using data modeling or querying ([0005]), in the disclosure of Reference Sharma, particularly the ability of federated learning models to analyze outliers across environment-level and local learning (Sharma: col. 7, lines 33-47), in order to provide the ability to use anonymous groups to analyze disparate data sets via either individual-to-segment or segment-to-segment matching using data modeling or querying, as taught by Reference Fenton (see at least [0005]), so that Sharma’s process of determining at least one calibration parameter for detecting federated learning environment-level outliers based at least in part on the one or more detected federated learning environment-level outliers, and outputting the at least one determined calibration parameter to at least a portion of the multiple client systems within the federated learning environment (see at least col. 1, lines 40-47), can be made more efficient and effective.
Further, the claimed invention is merely a combination of old elements in a similar field of endeavor (learning customer behavior using federated learning models); in the combination, each element merely performs the same function as it does separately, and one of ordinary skill in the art would have recognized that, given the existing technical ability to combine the elements as evidenced by Reference Sharma in view of Reference Fenton, the results of the combination were predictable (MPEP 2143(I)(A)).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
NPL Reference:
P. Treleaven, M. Smietanka and H. Pithadia, "Federated Learning: The Pioneering Distributed Machine Learning and Privacy-Preserving Data Technology," in Computer, vol. 55, no. 4, pp. 20-29, April 2022, doi: 10.1109/MC.2021.3052390.
This reference shows federated learning (pioneered by Google) is a new class of machine learning models trained on distributed data sets, and equally important, a key privacy-preserving data technology. The contribution of this article is to place it in perspective to other data science technologies.
Foreign Reference:
(CN 115222061 A) Cao K. This reference shows a method that involves obtaining a second sample data set through a client, where the second and first sample data sets are respectively used for different learning tasks of a first federated learning model. Multiple auxiliary sample data are extracted from the first data set by the client, and a local model is trained based on the second data set and the multiple auxiliary sample data. The trained local model is uploaded to the server through the client, and the local models uploaded by the multiple clients are received through the server. The first federated training model and the multiple local models are integrated to obtain a second federated training and learning model, where the local model includes a first local model and a second local model.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANCY PRASAD whose telephone number is (571)270-3265. The examiner can normally be reached M-F: 8:00 AM - 4:30 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Patricia Munson can be reached at (571)270-5396. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/N.N.P/Examiner, Art Unit 3624 /PATRICIA H MUNSON/Supervisory Patent Examiner, Art Unit 3624