DETAILED ACTION
This communication is in response to Application No. 18/311,883, filed on May 3, 2023, in which claims 1-17 are presented for examination.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Acknowledgment is made of applicant's claim for foreign priority based on an application filed in Japan on 05/16/2022. Acknowledgment is also made of receipt of papers submitted under 35 U.S.C. 119(a)-(d), which papers have been placed of record in the file.
Information Disclosure Statement
The information disclosure statement submitted on 05/03/2023 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement was considered by the examiner.
Specification
The contents of the specification are sufficient for examination purposes.
Claim Objections
Claims 1-15 are objected to because of the following informalities:
“causing one or more processors to include: . . .” (Claim 1, ln. 2) is semantically incorrect. The claim should be amended to clearly recite the relationship between the processors and the subsequent steps of the claimed method. For example, the claim could be amended to recite “causing one or more processors to perform a set of actions, including: . . .” or “causing one or more processors to perform: . . .” (objection applies equally to dependent claims 2-15).
“the one or more processors to include . . .” (Claims 8-9 and 12-15, ln. 2) is objected to for substantially the same reasoning as articulated in regard to the objection to Claim 1 above. As such, these claims should be similarly amended.
Appropriate correction is required.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claims 3-10 are rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
Regarding Claim 3, the claim recites “a plurality of the first facilities” (ln. 2-3). There is insufficient antecedent basis for this limitation in the claim. Specifically, Claim 1, which Claim 3 depends upon, recites only a single “first facility” (Claim 1, ln. 4). As a result, it is not clear which “first facilities” are being referenced with the recitation of “the”, and the claim is therefore rejected. The claim should be amended to remove “the” or to establish proper antecedent basis for “the first facilities” in either Claim 1 or Claim 3.
Regarding Claim 4, the claim recites “the characteristic of the second facility” (ln. 3-4). There is insufficient antecedent basis for this limitation in the claim. Specifically, Claim 1, which Claim 4 depends upon, recites “characteristics of a plurality of second facilities” (Claim 1, ln. 3). As a result, it is not clear which “second facility” is being referenced with the recitation of “the”; for example, it is not clear whether the limitation applies to all of the “second facilities” individually, or only a specific “second facility”. Likewise, it is not clear which “characteristic” is being referenced with the recitation of “the”; for example, it is not clear whether the limitation applies to all of the “characteristics” individually, or only a specific “characteristic”. Accordingly, the claim is rejected. The claim should be amended to recite “one of the characteristics of each of the second facilities” or otherwise modified to correct this issue.
Regarding Claims 5-8, the claims recite “the characteristic of the second facility” (Claim 5, ln. 4; Claim 6, ln. 3-4; Claim 7, ln. 3; Claim 8, ln. 3), which is indefinite for substantially the same reasoning as discussed in regard to the rejection of Claim 4. As a result, the claims are similarly rejected and should be amended in a similar manner.
Regarding Claim 9, the claim is rejected because it depends from rejected Claim 8.
Regarding Claim 10, the claim recites “the characteristic of the second facility” (Claim 10, ln. 4), which is indefinite for substantially the same reasoning as discussed in regard to the rejection of Claim 4. As a result, the claim is similarly rejected and should be amended in a similar manner.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to abstract ideas without significantly more.
Regarding Claim 1:
Step 1: Claim 1 is a process claim, and Claims 2-15 depend from Claim 1. Therefore, Claims 1-15 are directed to a statutory category of eligible subject matter.
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Here, steps of the claimed subject matter are mental processes. Specifically, the claim recites:
“An information processing method comprising . . . representing characteristics of a plurality of second facilities . . . such that prediction performance at each of the second facilities is improved according to the characteristics of each of the second facilities” (mental process – amounts to exercising judgment to form an opinion on how to represent known or observed information, with reference to a goal of improved prediction performance, which may be aided by pen and paper) and
“predicts a behavior of a user on an item” (mental process – amounts to exercising judgment to form an opinion on how a known or observed user will interact with a known or observed item, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“causing one or more processors to include . . . and training a plurality of the models” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“facilities different from a first facility where a dataset, which is used for a training of a model that . . . is collected” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea); and
“is collected” (collection of data amounts to mere data gathering, which is insignificant extra-solution activity because it is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“causing one or more processors to include . . . and training a plurality of the models” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept);
“facilities different from a first facility where a dataset, which is used for a training of a model that” (merely reciting a particular technological environment or field of use does not provide an inventive concept); and
“is collected” (mere data gathering is well-understood, routine, and conventional activity, see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
For the reasons above, Claim 1 is rejected as being directed to an abstract idea without significantly more. This rejection applies equally to dependent claims 2-15. The additional limitations of the dependent claims are addressed below.
Regarding Claim 2:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 2 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“by using data included in the dataset based on the characteristics of each of the second facilities” (mental process – amounts to exercising judgment to form an opinion on what data should be used as part of a dataset, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the one or more processors are configured to train the plurality of models” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“corresponding to each of the plurality of second facilities” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“wherein the one or more processors are configured to train the plurality of models” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“corresponding to each of the plurality of second facilities” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 2 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 3:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 3 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“represent the characteristics of each of the second facilities” (mental process – amounts to exercising judgment to form an opinion on representations, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein a plurality of the datasets, which are collected . . . are prepared” (amounts to mere data gathering, which is insignificant extra-solution activity because it is incidental to the claimed subject matter);
“and the one or more processors are configured to . . . and train the plurality of models” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea); and
“from each of a plurality of the first facilities . . . different from each of the first facilities . . . by using data included in each of the datasets based on the characteristics of each of the second facilities” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“wherein a plurality of the datasets, which are collected . . . are prepared” (mere data gathering is well-understood, routine, and conventional activity, see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration);
“and the one or more processors are configured to . . . and train the plurality of models” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept); and
“from each of a plurality of the first facilities . . . different from each of the first facilities . . . by using data included in each of the datasets based on the characteristics of each of the second facilities” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 3 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 4:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 4 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“represent a difference in a probability distribution of explanatory variables . . . as a representation of the characteristic” (mental process – amounts to exercising judgment to evaluate distributions of known or observed data, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the one or more processors are configured to . . . ” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“in the first facility and the second facility . . . of the second facility” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“wherein the one or more processors are configured to . . .” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“in the first facility and the second facility . . . of the second facility” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 4 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 5:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 5 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“represent a difference in a conditional probability between explanatory variables and response variables . . . as a representation of the characteristic” (mental process – amounts to exercising judgment to evaluate known or observed characteristics to form an opinion on differences of conditional probability, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the one or more processors are configured to” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“in the first facility and the second facility . . . of the second facility” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“wherein the one or more processors are configured to” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“in the first facility and the second facility . . . of the second facility” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 5 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 6:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 6 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“sampling data . . . from the dataset, according to the characteristic of the second facility” (mental process – amounts to exercising judgment to form an opinion on what data should be used as part of a dataset, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the one or more processors are configured to perform the training by” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“which is used for the training” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“wherein the one or more processors are configured to perform the training by” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“which is used for the training” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 6 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 7:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 7 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“weighting data included in the dataset, according to the characteristic of the second facility” (mental process – amounts to exercising judgment to form an opinion on the importance of data as part of a dataset, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the one or more processors are configured to perform the training by” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“wherein the one or more processors are configured to perform the training by” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept).
Accordingly, Claim 7 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 8:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 8 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“selecting a feature amount . . . according to the characteristic of the second facility” (mental process – amounts to exercising judgment to form an opinion on what data attributes should be used as part of a dataset, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“used in the model” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“used in the model” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 8 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 9:
Step 2A Prong 1: See the rejection of Claim 8 above, which Claim 9 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“deleting a part of a cross feature amount, which is represented by a combination of explanatory variables, from the feature amount” (mental process – amounts to exercising judgment to form an opinion on data attributes that should be excluded from use, with reference to known or observed variables, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include performing the training by” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“of the model” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include performing the training by” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“of the model” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 9 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 10:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 10 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“and the characteristic of the second facility . . . is a characteristic of a hypothetical facility assumed within a range of a characteristic of a facility capable of being an introduction destination facility” (mental process – amounts to exercising judgment to form an opinion on a hypothetical entity that is assumed to have a characteristic within a specific range, with reference to known or observed characteristics of entities, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“used in a suggestion system that . . . which is represented by the one or more processors . . . of the suggestion system” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“wherein the model is a prediction model” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea); and
“suggests an item to a user” (amounts to presenting data, which is insignificant extra-solution activity because it is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“used in a suggestion system that . . . which is represented by the one or more processors . . . of the suggestion system” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept);
“wherein the model is a prediction model” (merely reciting a particular technological environment or field of use does not provide an inventive concept); and
“suggests an item to a user” (presenting data is well‐understood, routine, and conventional activity, see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
Accordingly, Claim 10 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 11:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 11 depends on.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“wherein the dataset includes a behavior history of a plurality of users on a plurality of items in the first facility” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“wherein the dataset includes a behavior history of a plurality of users on a plurality of items in the first facility” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 11 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 12:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 12 depends on.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“storing a set of a plurality of candidate models, which include the plurality of models generated by performing the training, in a storage device” (amounts to storing data in memory, which is insignificant extra-solution activity because it is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“storing a set of a plurality of candidate models, which include the plurality of models generated by performing the training, in a storage device” (storing data in memory is well‐understood, routine, and conventional activity, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
Accordingly, Claim 12 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 13:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 13 depends on.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“comprising: causing the one or more processors to include” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“storing a set of a plurality of candidate models . . . in a storage device” (amounts to storing data in memory, which is insignificant extra-solution activity because it is incidental to the claimed subject matter); and
“which include a first model that is trained to improve prediction performance at the first facility by using data included in the dataset and a plurality of second models that are the plurality of models trained based on the characteristics of each of the plurality of second facilities” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional elements:
“comprising: causing the one or more processors to include” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept);
“storing a set of a plurality of candidate models . . . in a storage device” (storing data in memory is well‐understood, routine, and conventional activity, see Versata Dev. Group, Inc. v. SAP Am., Inc., 793 F.3d 1306, 1334, 115 USPQ2d 1681, 1701 (Fed. Cir. 2015); OIP Techs., 788 F.3d at 1363, 115 USPQ2d at 1092-93; which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration); and
“which include a first model that is trained to improve prediction performance at the first facility by using data included in the dataset and a plurality of second models that are the plurality of models trained based on the characteristics of each of the plurality of second facilities” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 13 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 14:
Step 2A Prong 1: See the rejection of Claim 13 above, which Claim 14 depends on.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“causing the one or more processors to include training the first model by using the data included in the dataset” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“causing the one or more processors to include training the first model by using the data included in the dataset” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept).
Accordingly, Claim 14 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 15:
Step 2A Prong 1: See the rejection of Claim 1 above, which Claim 15 depends on. Here, the claim recites additional elements that are mental processes. Specifically, the claim recites:
“evaluating performance of a plurality of candidate models . . . and extracting a model suitable . . . based on an evaluation result” (mental process – amounts to exercising judgment to evaluate model performance, with reference to known or observed values, in order to form an opinion on a suitable model, which may be aided by pen and paper).
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“further comprising: causing the one or more processors to include” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea) and
“each of a plurality of candidate models including the plurality of models by using data collected at a third facility that is different from the first facility . . . for the third facility from among the plurality of candidate models” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“further comprising: causing the one or more processors to include” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept) and
“each of a plurality of candidate models including the plurality of models by using data collected at a third facility that is different from the first facility . . . for the third facility from among the plurality of candidate models” (merely reciting a particular technological environment or field of use does not provide an inventive concept).
Accordingly, Claim 15 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 16:
Step 1: Claim 16 is a machine claim. Therefore, it is directed to a statutory category of eligible subject matter.
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Here, the claim recites limitations that are substantially the same as the limitations of Claim 1. As a result, and as elaborated above, these limitations are abstract ideas because they are mental processes.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“An information processing apparatus comprising: one or more processors; and one or more memories in which an instruction executed by the one or more processors is stored, wherein the one or more processors are configured to . . . and train a plurality of the models” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“facilities different from a first facility where a dataset, which is used for a training of a model that” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea); and
“is collected” (collection of data amounts to mere data gathering, which is insignificant extra-solution activity because it is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“An information processing apparatus comprising: one or more processors; and one or more memories in which an instruction executed by the one or more processors is stored, wherein the one or more processors are configured to . . . and train a plurality of the models” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept);
“facilities different from a first facility where a dataset, which is used for a training of a model that” (merely reciting a particular technological environment or field of use does not provide an inventive concept); and
“is collected” (mere data gathering is well‐understood, routine, and conventional activity, see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93, which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
For the reasons above, Claim 16 is rejected as being directed to an abstract idea without significantly more.
Regarding Claim 17:
Step 1: Claim 17 is a machine claim. Therefore, it is directed to a statutory category of eligible subject matter.
Step 2A Prong 1: If a claim limitation, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of generic computer components, then it falls within the "Mental Processes" grouping of abstract ideas. Here, the claim recites limitations that are substantially the same as the limitations of Claim 1. As a result, and as elaborated above, these limitations are abstract ideas because they are mental processes.
Step 2A Prong 2: This judicial exception is not integrated into a practical application.
The claim recites the additional elements:
“A non-transitory, computer-readable tangible recording medium which records thereon a program for causing, when read by a computer, the computer to realize: . . . and a function of training a plurality of the models” (amounts to mere instructions to apply the judicial exception on generic and unspecialized computer components, which do not impose any meaningful limits on practicing the abstract idea);
“facilities different from a facility where a dataset, which is used for a training of a model that” (amounts to merely reciting a particular technological environment or field of use, which does not impose any meaningful limits on practicing the abstract idea); and
“is collected” (collection of data amounts to mere data gathering, which is insignificant extra-solution activity because it is incidental to the claimed subject matter).
Step 2B: The claim does not include additional elements considered individually and in combination that are sufficient to amount to significantly more than the judicial exception.
The claim recites the additional element:
“A non-transitory, computer-readable tangible recording medium which records thereon a program for causing, when read by a computer, the computer to realize: . . . and a function of training a plurality of the models” (mere instructions to apply the exception using generic computer components cannot provide an inventive concept);
“facilities different from a facility where a dataset, which is used for a training of a model that” (merely reciting a particular technological environment or field of use does not provide an inventive concept); and
“is collected” (mere data gathering is well‐understood, routine, and conventional activity, see OIP Techs., 788 F.3d at 1362-63, 115 USPQ2d at 1092-93, which is recited here with a high level of generality, and remains insignificant extra-solution activity even upon reconsideration).
For the reasons above, Claim 17 is rejected as being directed to an abstract idea without significantly more.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-3, 7-8, and 12-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Wouhaybi et al. (hereinafter Wouhaybi) (Pat. Pub. No. US 2022/0222583 A1).
Regarding Claim 1, Wouhaybi teaches an information processing method comprising: causing one or more processors to include (Abstract, “Methods, apparatus, systems, and articles of manufacture are disclosed for clustered federated learning. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of instantiate or execute the instructions”, where “methods” for “federated learning” comprise an information processing method, see generally Para. [0051], “Artificial intelligence (AI), including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output”; see also Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”):
representing characteristics of a plurality of second facilities different from a first facility where a dataset, which is used for a training of a model that predicts a behavior of a user on an item, is collected (Para. [0028], “Examples disclosed herein include clustered federated learning using context data . . . include[ing] . . . enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes . . . and/or updated deep learning network weights to a central location” and Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”, where characteristics are represented, “context data”, for a plurality of “environment[s]”, and where a dataset, which is used for training a model, is collected at each environment, “enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes”; see also Fig. 1 and Para. [0034], “In some examples, the environments 124, 126 are representative of physical environments, such as commercial, industrial, public, and/or residential environments. For example, one(s) of the environments 124, 126 can be a commercial environment such as a bar and/or nightclub, a hospital, a movie theatre, a restaurant, a retail store, etc., and/or any combination(s) thereof. In some examples, one(s) of the environments 124, 126 can be an industrial environment . . . [or] a public environment”, where the “environments” are facilities, such as “retail store” facilities or “public” facilities; see also Fig. 4, where clusters, such as “CLUSTER 1” and “CLUSTER 2”, can be considered a first facility, and the remaining clusters, “CLUSTER 3” through “CLUSTER 4”, can be a plurality of second facilities, which are different from the first facility because, as discussed above, the nodes within the clusters, “404” – “410”, are clustered into “environment[s] that are similar”, so the presence of distinct clusters demonstrates differences between “environment” facilities; see also Fig. 1 and Para. [0046], “The first node 108 can provide the sensor data as model input(s) to the first ML model 138A. The first ML model 138A can execute inference operations on the sensor data to produce and/or otherwise output model outputs, which can include a decision, a determination, a recommendation, etc., to carry out an action, operation, etc., in connection with . . . the first environment 124”, where the model produces an output, “execute inference operations on the sensor data to produce and/or otherwise output model outputs”, which can include a prediction, see Para. [0062], “an event not accurately predicted by the first ML model 138A”, and the prediction can be of a user behavior on an item, see Para. [0073], “In some examples, the model outputs can be a detection of an object (e.g., a person, an animal, a vehicle, etc.) in connection with an autonomous vehicle environment (e.g., a road, a highway, etc.)”, where the “person” is within the broadest reasonable interpretation of a user, their presence is a behavior, and either the “vehicle” or the “road” itself is within the broadest reasonable interpretation of an item); and
training a plurality of the models such that prediction performance at each of the second facilities is improved according to the characteristics of each of the second facilities (Para. [0032], “Increased accuracy is achieved, for example, by using new or updated weight values determined (e.g., iteratively determined, recursively determined, etc.) . . . Complexity is reduced, for example, by enabling a node to execute and/or train (e.g., retrain) a portion of a larger AI/ML model instead of an entirety of the larger AI/ML model. By executing and/or training (e.g., retraining) a portion of the larger AI/ML model, less resources (e.g., compute, storage, network, security, acceleration, etc., resources) may be utilized to effectuate the executing and/or the training (e.g., the retraining) . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data. Advantageously, at least some example federated learning techniques disclosed herein can include providing a subset or a portion of an AI/ML model to be deployed on resource constrained nodes to increase AI/ML learning/inference capabilities of the resource constrained nodes while minimizing and/or otherwise reducing the hardware, software, and/or firmware utilization of the resource constrained nodes”, where a plurality of the models at each of the second facilities are trained according to the characteristics of each of the facilities, “providing a subset or a portion of an AI/ML model” to each “node” for “retrain[ing]” based on the “cluster” “with respect to their context data”, which improves prediction “accuracy” and “[c]omplexity”).
Regarding Claim 2, Wouhaybi teaches the information processing method according to claim 1, wherein the one or more processors are configured to (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
train the plurality of models corresponding to each of the plurality of second facilities by using data included in the dataset based on the characteristics of each of the second facilities (Para. [0032], “By executing and/or training (e.g., retraining) a portion of the larger AI/ML model, less resources (e.g., compute, storage, network, security, acceleration, etc., resources) may be utilized to effectuate the executing and/or the training (e.g., the retraining) . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data. Advantageously, at least some example federated learning techniques disclosed herein can include providing a subset or a portion of an AI/ML model to be deployed on resource constrained nodes to increase AI/ML learning/inference capabilities of the resource constrained nodes while minimizing and/or otherwise reducing the hardware, software, and/or firmware utilization of the resource constrained nodes”, where a plurality of the models at each of the second facilities are trained according to the characteristics of each of the facilities, “providing a subset or a portion of an AI/ML model” to each “node” for “retrain[ing]” based on the “cluster” “with respect to their context data”, using data included in the dataset, see Para. [0028], “nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes”).
Regarding Claim 3, Wouhaybi teaches the information processing method according to claim 1, wherein a plurality of the datasets, which are collected from each of a plurality of the first facilities, are prepared (Para. [0028], “Examples disclosed herein include clustered federated learning using context data . . . include[ing] . . . enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes . . . and/or updated deep learning network weights to a central location” and Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”, where the plurality of datasets, “data”, includes datasets collected from a plurality of “nodes”; see also Fig. 4, where “CLUSTER 1” and “CLUSTER 2” could be considered the plurality of first facilities, whereas “CLUSTER 3” and “CLUSTER 4” could be considered the plurality of second facilities; see also Para. [0059], “In some examples, input data undergoes pre-processing before being used as an input to the ML model”, where the “input data” is prepared, “pre-process[ed]”),
and the one or more processors are configured to (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”):
represent the characteristics of each of the second facilities different from each of the first facilities (Para. [0028], “Examples disclosed herein include clustered federated learning using context data . . . include[ing] . . . enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes . . . and/or updated deep learning network weights to a central location” and Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”, where characteristics are represented, “context data”, for a plurality of “environment” facilities; see also Fig. 4, where clusters, such as “CLUSTER 1” and “CLUSTER 2”, can be considered a first facility, and the remaining clusters, “CLUSTER 3” through “CLUSTER 4”, can be a plurality of second facilities, which are different from the first facility because, as discussed above, the nodes within the clusters, “404” – “410”, are clustered into “environment[s] that are similar”, so the presence of distinct clusters demonstrates differences between “environment” facilities); and
train the plurality of models by using data included in each of the datasets based on the characteristics of each of the second facilities (Fig. 1 and Para. [0048] – [0049], “In example operation, the first node 108 can train (e.g., retrain) the first ML model 138A . . . [to generate] new, revised, or updated weights . . . the first node 108 can provide the new/revised/updated weights and/or the first node context data 136A to the model handler 102 of the server 128 . . . the model handler 102 of the server 128 can identify portion(s) of the ML models 104 of which to retrain using the new/revised/updated weights . . . the model handler 102 can distribute and/or otherwise deploy the first portion, and/or, more generally, the first one of the ML models 104, to one(s) of the nodes 108, 110, 112, 114, 116, 118, 120, 122 that correspond(s) to the first node context data 136A”, where the “train[ing]” occurs by weighting data included in the dataset, “new, revised, or updated weights” which are “distribut[ed]” for training at other facilities, such as “ENVIRONMENT B 126” with nodes “116” through “122”, based on the characteristics of the second facilities, “context data”).
Regarding Claim 7, Wouhaybi teaches the information processing method according to claim 1, wherein the one or more processors are configured to perform (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
the training by weighting data included in the dataset, according to the characteristic of the second facility (Fig. 1 and Para. [0048] – [0049], “In example operation, the first node 108 can train (e.g., retrain) the first ML model 138A . . . [to generate] new, revised, or updated weights . . . the first node 108 can provide the new/revised/updated weights and/or the first node context data 136A to the model handler 102 of the server 128 . . . the model handler 102 of the server 128 can identify portion(s) of the ML models 104 of which to retrain using the new/revised/updated weights . . . the model handler 102 can distribute and/or otherwise deploy the first portion, and/or, more generally, the first one of the ML models 104, to one(s) of the nodes 108, 110, 112, 114, 116, 118, 120, 122 that correspond(s) to the first node context data 136A”, where the “train[ing]” occurs by weighting data included in the dataset, “new, revised, or updated weights” which are “distribut[ed]” for training at other facilities, such as “ENVIRONMENT B 126” with nodes “116” through “122”, based on the characteristics of the second facility, “context data”).
Regarding Claim 8, Wouhaybi teaches the information processing method according to claim 1, further comprising: causing the one or more processors to include (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
selecting a feature amount used in the model, according to the characteristic of the second facility (Fig. 1 and Para. [0048] – [0049], “In example operation, the first node 108 can train (e.g., retrain) the first ML model 138A . . . [to generate] new, revised, or updated weights . . . the first node 108 can provide the new/revised/updated weights and/or the first node context data 136A to the model handler 102 of the server 128 . . . the model handler 102 of the server 128 can identify portion(s) of the ML models 104 of which to retrain using the new/revised/updated weights . . . the model handler 102 can distribute and/or otherwise deploy the first portion, and/or, more generally, the first one of the ML models 104, to one(s) of the nodes 108, 110, 112, 114, 116, 118, 120, 122 that correspond(s) to the first node context data 136A . . . For example, the model handler 102 of the server 128 can identify that the second node 110 corresponds to the first node context data 136A based on a determination that the first node context data 136A and the second node context data 136B include the same (or substantially similar) . . . location”, where the “train[ing]” occurs by weighting data included in the dataset, “new, revised, or updated weights”, which include amounts of features that will be used by a model for further analysis, which are selected to be used by models at the second facility, such as “138E” through “138H” at “ENVIRONMENT B 126”, based on the characteristics of the second facility, “location” “context data”).
Regarding Claim 12, Wouhaybi teaches the information processing method according to claim 1, further comprising: causing the one or more processors to (Fig. 2; Para. [0050], “The model handler circuitry 200 of FIG. 2 may be instantiated by processor circuitry such as a central processing unit executing instructions”)
include storing a set of a plurality of candidate models, which include the plurality of models generated by performing the training, in a storage device (Fig. 2 and Para. [0058], “The model handler circuitry 200 may store the ML model 266 and the ML executable 268 in an example datastore 260”, where “the ML model 266” is stored in a storage device, “datastore 260”; Fig. 1 and Para. [0051], “the ML model 266 can correspond to one(s) of the ML models 104, the first ML model 138A, the second ML model 138B, the third ML model 138C, the fourth ML model 138D, the fifth ML model 138E, the sixth ML model 138F, the seventh ML model 138G, and/or the eighth ML model 138H of FIG. 1”, where “the ML model 266” includes a set of a plurality of models, “the ML models 104” and models “138A” to “138H”, which are generated by performing the training, see Para. [0054], “Different types of training may be performed . . . for the ML model 266 that reduce model error”; see also Para. [0045], “the model handler 102 of the server 128 can instantiate one(s) of the ML models 104 based on the context data 106”, where “the ML models 104” are candidate models because a candidate is selected from the set based on “the context data”, and therefore, so are models “138A” to “138H”, because they are part of “the ML models 104”, see Para. [0041], “In some examples, the ML models 104 include one(s) of the ML models 138A, 138B, 138C, 138D, 138E, 138F, 138G, 138H”).
Regarding Claim 13, Wouhaybi teaches the information processing method according to claim 1, further comprising: causing the one or more processors to (Fig. 2; Para. [0050], “The model handler circuitry 200 of FIG. 2 may be instantiated by processor circuitry such as a central processing unit executing instructions”)
include storing a set of a plurality of candidate models, which include a first model that is trained to improve prediction performance at the first facility by using data included in the dataset and a plurality of second models that are the plurality of models trained based on the characteristics of each of the plurality of second facilities, in a storage device (Fig. 2 and Para. [0058], “The model handler circuitry 200 may store the ML model 266 and the ML executable 268 in an example datastore 260”, where “the ML model 266” is stored in a storage device, “datastore 260”; Fig. 1 and Para. [0051], “the ML model 266 can correspond to one(s) of the ML models 104, the first ML model 138A, the second ML model 138B, the third ML model 138C, the fourth ML model 138D, the fifth ML model 138E, the sixth ML model 138F, the seventh ML model 138G, and/or the eighth ML model 138H of FIG. 1”, where “the ML model 266” includes a set of a plurality of models, “the ML models 104” and models “138A” to “138H”, which are generated by performing the training, see Para. [0054], “Different types of training may be performed . . . for the ML model 266 that reduce model error”, see also Fig. 4, where the models at the nodes “404” of “CLUSTER 1” can be considered to comprise the first model at the first facility, whereas the models at the nodes of “CLUSTER 2” through “CLUSTER 4” can be considered the second models at the second facilities; see also Para. [0032], “Increased accuracy is achieved, for example, by using new or updated weight values determined (e.g., iteratively determined, recursively determined, etc.) . . . Complexity is reduced, for example, by enabling a node to execute and/or train (e.g., retrain) a portion of a larger AI/ML model instead of an entirety of the larger AI/ML model. By executing and/or training (e.g., retraining) a portion of the larger AI/ML model, less resources (e.g., compute, storage, network, security, acceleration, etc., resources) may be utilized to effectuate the executing and/or the training (e.g., the retraining) . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data. Advantageously, at least some example federated learning techniques disclosed herein can include providing a subset or a portion of an AI/ML model to be deployed on resource constrained nodes to increase AI/ML learning/inference capabilities of the resource constrained nodes while minimizing and/or otherwise reducing the hardware, software, and/or firmware utilization of the resource constrained nodes”, where a plurality of the models at each of the second facilities are trained according to the characteristics of each of the facilities, “providing a subset or a portion of an AI/ML model” to each “node” for “retrain[ing]” based on the “cluster” “with respect to their context data”, which improves prediction “accuracy” and “[c]omplexity”; see also Para. [0028], “nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes”, where data from the dataset is used for training).
Regarding Claim 14, Wouhaybi teaches the information processing method according to claim 13, further comprising: causing the one or more processors to include (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
training the first model by using the data included in the dataset (Para. [0048], “In example operation, the first node 108 can train (e.g., retrain) the first ML model 138A based on the label(s). For example, the first node 108 can invoke the first ML model 138A to carry out retraining operations to determine, generate, and/or otherwise output new, revised, or updated weights (e.g., ML weights, neural network weights, etc.) of the first ML model 138A . . . Advantageously, the first node 108 can retrain the first ML model 138A to identify similar defects in future operations of the first environment 124 and thereby increase an accuracy of the first ML model 138A”, where the “labels” are data included in the dataset, see Para. [0027], “if a first local node in an industrial environment is communicatively coupled to a sensor such as a camera, and the first local node generates labels”).
Regarding Claim 15, Wouhaybi teaches the information processing method according to claim 1, further comprising: causing the one or more processors to include (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
evaluating performance of each of a plurality of candidate models including the plurality of models (Para. [0057], “the model handler circuitry 200 sub-divides the training data 262 into a first portion of data for training the ML model 266, and a second portion of data for validating the ML model 266”, where “validating the ML model 266” is evaluating its performance; see also Para. [0051], “the ML model 266 can correspond to one(s) of the ML models 104, the first ML model 138A, the second ML model 138B, the third ML model 138C, the fourth ML model 138D, the fifth ML model 138E, the sixth ML model 138F, the seventh ML model 138G, and/or the eighth ML model 138H of FIG. 1”, where “the ML model 266” includes a set of a plurality of models, “the ML models 104” and models “138A” to “138H”, which are generated by performing the training, see Para. [0054], “Different types of training may be performed . . . for the ML model 266 that reduce model error”; see also Para. [0045], “the model handler 102 of the server 128 can instantiate one(s) of the ML models 104 based on the context data 106”, where “the ML models 104” are candidate models because a candidate is selected from the set based on “the context data”, and therefore, so are models “138A” to “138H”, because they are part of “the ML models 104”, see Para. [0041], “In some examples, the ML models 104 include one(s) of the ML models 138A, 138B, 138C, 138D, 138E, 138F, 138G, 138H”)
by using data collected at a third facility that is different from the first facility (Fig. 2 and Para. [0057], “the model handler circuitry 200 utilizes the training data 262 that originates from locally generated data, such as labels, sensor data, etc”, where “the training data 262” is “locally generated data” from the facilities; see also Fig. 4, where the first facility can be visualized as “CLUSTER 1”, the second facilities can be visualized as “CLUSTER 3” and “CLUSTER 4”, and a third facility can be visualized as “CLUSTER 2”) and
extracting a model suitable for the third facility from among the plurality of candidate models based on an evaluation result (Para. [0058] – [0059], “Once training is complete, the model handler circuitry 200 may deploy the ML model 266 for use as an executable construct that processes an input and provides an output based on the network of nodes and connections defined in the ML model 266 . . . In some examples, the model handler circuitry 200 may invoke the interface circuitry 210 to transmit the ML model 266, the ML executable 268, etc., to one(s) of the nodes 108, 110, 112, 114, 116, 118, 120, 122 . . . the deployed ML model 266, the ML executable 268, etc., may be operated in an inference phase to process data”, where “the ML model 266” is extracted, “transmit[ted]”, to “the nodes”, including the nodes “402” of the third facility “406”, which must be suitable for the third facility in order to be “deployed [there to] . . . operated in an inference phase to process data”, see Fig. 4, and where the extracted model is based on an evaluation result because the “the network of nodes and connections defined in the ML model” is based on the “training”, which as discussed above includes evaluation).
Regarding Claim 16, Wouhaybi teaches an information processing apparatus comprising: one or more processors; and one or more memories in which an instruction executed by the one or more processors is stored, wherein the one or more processors are configured to: . . . (Abstract, “Methods, apparatus, systems, and articles of manufacture are disclosed for clustered federated learning. An example apparatus includes at least one memory, instructions, and processor circuitry to at least one of instantiate or execute the instructions”, where an “apparatus” for “federated learning” is an information processing apparatus, which comprises one or more processors, “processor circuitry”, and one or more memories, “at least one memory”, that store the “instruction” executed by the “processor circuitry”, see Para. [0118], “The machine readable instructions may be one or more executable programs or portion(s) of an executable program for execution by processor circuitry . . . [t]he program may be embodied in software stored on one or more non-transitory computer readable storage media such . . . memory”; see generally Para. [0051], “Artificial intelligence (AI), including machine learning (ML), deep learning (DL), and/or other artificial machine-driven logic, enables machines (e.g., computers, logic circuits, etc.) to use a model to process input data to generate an output”).
The remaining limitations are substantially the same as the limitations of Claim 1; therefore, Claim 16 is rejected under the same rationale.
Regarding Claim 17, Wouhaybi teaches a non-transitory, computer-readable tangible recording medium which records thereon a program for causing, when read by a computer, the computer to realize: . . . (Para. [0199], “a non-transitory computer readable storage medium comprising instructions that, when executed, cause processor circuitry to . . .”; see also Para. [0122], “As used herein, the terms non-transitory computer readable medium, non-transitory computer readable storage medium, non-transitory machine readable medium, and non-transitory machine readable storage medium are expressly defined to include any type of computer and/or machine readable storage device and/or storage disk and to exclude propagating signals and to exclude transmission media”).
The remaining limitations are substantially the same as the limitations of Claim 1; therefore, Claim 17 is rejected under the same rationale.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 4 and 6 are rejected under 35 U.S.C. 103 as being unpatentable over Wouhaybi in view of Maxx (“Three Common Ways for comparing Two Dataset Distributions”).
Regarding Claim 4, Wouhaybi teaches the information processing method according to claim 1, wherein the one or more processors are configured to (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
represent a difference . . . in the first facility and the second facility, as a representation of the characteristic of the second facility (Para. [0032], “the central location can cluster nodes of an environment that are similar to each other with respect to their context data”; Fig. 1; and FIG. 4, where a difference between the first facility, such as “ENVIRONMENT A 124”, and the second facility, such as “ENVIRONMENT B 126” must be represented in order to “cluster nodes of an environment”, and where the cluster associated with second facility, such as “CLUSTER 2 406”, forms a representation of the characteristics of the second facility, such as its “environmental” characteristics or “location” characteristics, see Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”).
Wouhaybi does not explicitly disclose . . . in a probability distribution of explanatory variables . . . (where probability distributions are not specifically discussed in regard to the difference representation).
However, Maxx teaches . . . [representing differences] in a probability distribution of explanatory variables [between datasets] . . . (Pg. 2, Para. 3, “For a multiple variable dataset, the approach is quite simple: for each variable from the sample you should compare its probability distribution with the probability distribution of the same variable of the population”, where “each variable” is an explanatory variable because it is measured, “compare[d]”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the representation of differences between facilities as a representation of characteristics of each facility of Wouhaybi with the representation of differences of probability distributions of explanatory variables between datasets of Maxx in order to ensure clustered nodes within a facility are sufficiently similar for efficient model training (compare Wouhaybi, Para. [0026], “Some such federated learning techniques may be sufficient for some applications, such as personal navigation using maps. However, with other applications, such as medical, retail, or industrial applications, some such federated learning techniques may be deficient and omit context . . . For example, some such federated learning techniques do not augment observed data using node information. In some examples, a node may update its local AI/ML model blindly using data from a different node that is observing vastly different behaviors, conditions, events, etc. As a result, the node may perform constant retraining if an environment includes a plurality of nodes with different node information, which can lead to relatively large and/or complex AI/ML models stored at one or more different ones of the nodes” with Maxx, Pg. 1, Para. 1, “Other possible scenario is when you want to compare the distribution between your training and testing sets to confirm the required level of similarity between both of them” and Maxx, Pg. 3-4, Para. 3-1, “for each variable from the sample you should compare its probability distribution with the probability distribution of the same variable of the population . . . This simple statistical framework is very useful for discovering possible problems in your models when testing and training sets have some differences”).
Regarding Claim 6, Wouhaybi teaches the information processing method according to claim 1, wherein the one or more processors are configured to (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
perform the training by . . . from the dataset, according to the characteristic of the second facility (Fig. 1 and Para. [0048] – [0049], “In example operation, the first node 108 can train (e.g., retrain) the first ML model 138A . . . [to generate] new, revised, or updated weights . . . the first node 108 can provide the new/revised/updated weights and/or the first node context data 136A to the model handler 102 of the server 128 . . . the model handler 102 of the server 128 can identify portion(s) of the ML models 104 of which to retrain using the new/revised/updated weights. . . .the model handler 102 can distribute and/or otherwise deploy the first portion, and/or, more generally, the first one of the ML models 104, to one(s) of the nodes 108, 110, 112, 114, 116, 118, 120, 122 that correspond(s) to the first node context data 136A”, where the “train[ing]” occurs by weighting data included in the dataset, “new, revised, or updated weights” which are “distribut[ed]” for training at other facilities, such as “ENVIRONMENT B 126” with nodes “116” through “122”, based on the characteristics of the second facility, “context data”).
Wouhaybi does not explicitly disclose . . . sampling data, which is used for the training . . . (where sampling of data is not explicitly described in relation to the training).
However, Maxx teaches . . . sampling data, which is used for the training . . . (Pg. 1, Para. 1, “A common situation is when you need to sample from a larger dataset”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the training of a model using a training dataset of Wouhaybi with the sampling of data from a training dataset for use in training of Maxx in order to reduce the time required for model tuning, while ensuring the tuned model works properly when deployed (Maxx, Pg. 1, Para. 1, “A common situation is when you need to sample from a larger dataset in order to reduce the time required for hyper-parameters tuning. In this case, you want the sample follows the same distribution of the original dataset (population) to guarantee the tuned model works properly in production”).
Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Wouhaybi in view of Maxx and Yamada et al. (hereinafter Yamada) (Pat. Pub. No. US 2021/0342521 A1).
Regarding Claim 5, Wouhaybi teaches the information processing method according to claim 1, wherein the one or more processors are configured to (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
represent a difference . . . in the first facility and the second facility, as a representation of the characteristic of the second facility (Para. [0032], “the central location can cluster nodes of an environment that are similar to each other with respect to their context data”; Fig. 1; and FIG. 4, where a difference between the first facility, such as “ENVIRONMENT A 124”, and the second facility, such as “ENVIRONMENT B 126” must be represented in order to “cluster nodes of an environment”, and where the cluster associated with second facility, such as “CLUSTER 2 406”, forms a representation of the characteristics of the second facility, such as its “environmental” characteristics or “location” characteristics, see Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”).
Wouhaybi does not explicitly disclose . . . in a conditional probability between explanatory variables and response variables . . . (where conditional probabilities are not specifically discussed in regard to the difference representation).
However, Maxx teaches . . . [representing differences] in a . . . probability between [datasets] . . . (Pg. 2, Para. 3, “For a multiple variable dataset, the approach is quite simple: for each variable from the sample you should compare its probability distribution with the probability distribution of the same variable of the population”).
The reasons for obviousness were discussed in regard to the rejection of claim 4 above and remain applicable here.
Specifically, before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the representation of differences between facilities as a representation of characteristics of each facility of Wouhaybi with the representation of differences of probabilities between datasets of Maxx in order to ensure clustered nodes within a facility are sufficiently similar for efficient model training (compare Wouhaybi, Para. [0026], “Some such federated learning techniques may be sufficient for some applications, such as personal navigation using maps. However, with other applications, such as medical, retail, or industrial applications, some such federated learning techniques may be deficient and omit context . . . For example, some such federated learning techniques do not augment observed data using node information. In some examples, a node may update its local AI/ML model blindly using data from a different node that is observing vastly different behaviors, conditions, events, etc. As a result, the node may perform constant retraining if an environment includes a plurality of nodes with different node information, which can lead to relatively large and/or complex AI/ML models stored at one or more different ones of the nodes” with Maxx, Pg. 1, Para. 1, “Other possible scenario is when you want to compare the distribution between your training and testing sets to confirm the required level of similarity between both of them” and Maxx, Pg. 3-4, Para. 3-1, “for each variable from the sample you should compare its probability distribution with the probability distribution of the same variable of the population . . . This simple statistical framework is very useful for discovering possible problems in your models when testing and training sets have some differences”).
Additionally, Yamada teaches . . . [identifying] a conditional probability between explanatory variables and response variables [in training data] . . . (Abstract, “An extraction apparatus . . . [to perform operations] on training data . . . [to] generate a list of conditional probabilities”, where a plurality of explanatory and response variables are identified in the “list of conditional probabilities”; Para. [0056], “the conditional probability P(m|X.sub.i)”, where “X.sub.i” is the explanatory variable and “m” is the response variable).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the representation of dataset probability differences between facilities as a representation of characteristics of each facility of Wouhaybi in view of Maxx with the identifying of conditional probabilities between explanatory and response variables in a dataset of Yamada in order to ensure clustered nodes within a facility are sufficiently similar for efficient model training (compare Wouhaybi, Para. [0026], “Some such federated learning techniques may be sufficient for some applications, such as personal navigation using maps. However, with other applications, such as medical, retail, or industrial applications, some such federated learning techniques may be deficient and omit context . . . For example, some such federated learning techniques do not augment observed data using node information. In some examples, a node may update its local AI/ML model blindly using data from a different node that is observing vastly different behaviors, conditions, events, etc. As a result, the node may perform constant retraining if an environment includes a plurality of nodes with different node information, which can lead to relatively large and/or complex AI/ML models stored at one or more different ones of the nodes” with Maxx, Pg. 1, Para. 1, “Other possible scenario is when you want to compare the distribution between your training and testing sets to confirm the required level of similarity between both of them” and Maxx, Pg. 3-4, Para. 3-1, “for each variable from the sample you should compare its probability distribution with the probability distribution of the same variable of the population . . . 
This simple statistical framework is very useful for discovering possible problems in your models when testing and training sets have some differences”), using a metric that can detect biases within each dataset which may affect the quality of the training data (Yamada, Para. [0054] – [0056], “a case will be described in which the information gain calculation unit 1411 calculates the entropy H(m|X.sub.i) of the word m during the condition X.sub.i . . . the conditional probability P(m|X.sub.i) is indicated”, where the detection of a significant difference between conditional probabilities, “conditional probability P(m|X.sub.i”, may indicate a bias in one dataset that would result in inefficiency if used for more global training).
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Wouhaybi in view of Naidoo et al. (hereinafter Naidoo) (Pat. Pub. No. US 2022/0147865 A1).
Regarding Claim 9, Wouhaybi teaches the information processing method according to claim 8, further comprising: causing the one or more processors to include (Para. [0010] – [0015] and Fig. 8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”)
performing the training by . . . [performing operations relating to] the feature amount of the model (Fig. 1 and Para. [0048] – [0049], “In example operation, the first node 108 can train (e.g., retrain) the first ML model 138A . . . [to generate] new, revised, or updated weights . . . the first node 108 can provide the new/revised/updated weights and/or the first node context data 136A to the model handler 102 of the server 128 . . . the model handler 102 of the server 128 can identify portion(s) of the ML models 104 of which to retrain using the new/revised/updated weights. . . .the model handler 102 can distribute and/or otherwise deploy the first portion, and/or, more generally, the first one of the ML models 104, to one(s) of the nodes 108, 110, 112, 114, 116, 118, 120, 122 that correspond(s) to the first node context data 136A”, where the “train[ing]”, which involves operations, occurs by weighting data included in the dataset, “new, revised, or updated weights”, which include amounts of features that will be used by a model for further analysis).
Wouhaybi does not explicitly disclose . . . deleting a part of a cross feature amount, which is represented by a combination of explanatory variables, from . . . (where the initial training is not specifically described as comprising a deleting operation or cross feature amounts).
However, Naidoo teaches . . . [machine learning techniques, comprising] . . . deleting a part of a cross feature amount, which is represented by a combination of explanatory variables, from [the training data] . . . (Para. [0034], “consider a set of initial features that includes features F.sub.1, F.sub.2, F.sub.3, and F.sub.4. By processing numerical representations for the set of initial features across all of the plurality of predictive data entities using a cross-feature collinearity detection model, the predictive data analysis computing entity may first determine that the features F.sub.1 and F.sub.2 are co-linear. To address this cross-feature co-linearity, the predictive data analysis computing entity 106 may remove F.sub.1 from the set of initial features to generate a set of consolidated features that includes F.sub.2, F.sub.3, and F.sub.4”, where “features F.sub.1, F.sub.2, F.sub.3, and F.sub.4” are explanatory features because they are being “consider[ed]” and “process[ed]”, and where part of a “cross-feature” amount, “F.sub.1” from “F.sub.1 and F.sub.2”, is deleted, “remov[ed]”; see generally Para. [0017], “Various embodiments of the present invention disclose techniques for performing predictive prioritization that utilize the combination of a supervised trained machine learning model and an unsupervised trained machine learning model to infer relationships between key target features”).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the performance of the training, involving operations relating to the feature amount of the model of Wouhaybi with the machine learning techniques, comprising deleting a part of a cross feature amount, which is represented by a combination of explanatory variables, from the training data of Naidoo in order to learn predictive patterns in training data (Naidoo, Para. [0017], “the reliability advantage resulting from using both supervised learning and unsupervised learning to learn cross-feature relationships and feature refinements, which in turn enables learning predictive patterns even when using small amounts of labeled training data”), which allows for improved training performance by removing redundant data that can be inferred from the consolidated dataset (Naidoo, Para. [0034], “the predictive data analysis computing entity may first determine that the features F.sub.1 and F.sub.2 are co-linear. To address this cross-feature co-linearity, the predictive data analysis computing entity 106 may remove F.sub.1 from the set of initial features to generate a set of consolidated features that includes F.sub.2, F.sub.3, and F.sub.4”; Naidoo, Para. [0001], “Various embodiments of the present address the shortcomings of the predictive data analysis systems and disclose various techniques for efficiently and reliably performing predictive prioritization”).
Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Wouhaybi in view of Spivak (“Automating Recommendation Engine Training with Amazon Personalize and AWS Glue”).
Regarding Claim 10, Wouhaybi teaches the information processing method according to claim 1, wherein the model is a prediction model used in a suggestion system . . . (Para. [0062], “predicted by the first ML model 138A”, where the “predict[ion]” “model” is used in a suggestion system, see Para. [0046], “The first node 108 can provide the sensor data as model input(s) to the first ML model 138A. The first ML model 138A can execute inference operations on the sensor data to produce and/or otherwise output model outputs, which can include a decision, a determination, a recommendation, etc., to carry out an action, operation, etc., in connection with . . . the first environment 124”, where “a decision, a determination, a recommendation, etc., to carry out an action, operation, etc.” is a suggestion),
and the characteristic of the second facility, which is represented by the one or more processors (Para. [0028], “Examples disclosed herein include clustered federated learning using context data . . . include[ing] . . . enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes . . . and/or updated deep learning network weights to a central location” and Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”, where characteristics are represented, “context data”, for a plurality of “environment[s]”, and where a dataset, which is used for training a model, is collected at each environment, “enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes”; see also Fig. 1 and Para. [0034], “In some examples, the environments 124, 126 are representative of physical environments, such as commercial, industrial, public, and/or residential environments. For example, one(s) of the environments 124, 126 can be a commercial environment such as a bar and/or nightclub, a hospital, a movie theatre, a restaurant, a retail store, etc., and/or any combination(s) thereof. In some examples, one(s) of the environments 124, 126 can be an industrial environment . . . [or] a public environment”, where the “environments” are facilities, such as “retail store” facilities or “public” facilities; see also Fig. 4, where clusters, such as “CLUSTER 1” and “CLUSTER 2”, can be considered a first facility, and the remaining clusters, “CLUSTER 3” through “CLUSTER 4”, can be a plurality of second facilities; see also Para. [0010] – [0015] and Fig. 
8-13, where the methods, as depicted by the “flowchart diagram[s]”, cause one or more processors to perform actions, “flowchart representative . . . that may be executed by example processor circuitry to implement the example”),
is a characteristic of a hypothetical facility assumed within a range of a characteristic of a facility (Fig. 1; Fig. 4; and Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node . . . the central location can cluster nodes of an environment that are similar to each other with respect to their context data”, where the characteristic, “context data”, of the range of “node[s]” that fit within the “cluster” corresponds with a facility, i.e., the environment assumed to be within the cluster, such as “Environment A 124”, which is hypothetical and constructed from data assumed to be “similar to each other with respect to their context data”, see Para. [0028], “Examples disclosed herein include clustered federated learning using context data . . . include[ing] . . . enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes . . . and/or updated deep learning network weights to a central location”)
capable of being an introduction destination facility of the suggestion system (Fig. 1 and Para. [0068], “For example, the model trainer circuitry 230 can determine to retrain the first ML model 138A at a local node, such as the first node 108 where the first ML model 138A is to be deployed”, where “circuitry” of the suggestion system can operationalize the “local node[s]”, such that they are capable of “deploy[ing]” the “model”, and therefore, the associated facilities, the actual locations that correspond with the reconstructed representations of hypothetical “ENVIRONMENT[S]”, are capable of being an introduction destination facility).
Wouhaybi does not explicitly disclose . . . that suggests an item to a user . . . (where the suggestions of the suggestion system are not specifically items to users).
However, Spivak teaches . . . [a machine learning system] that suggests an item to a user . . . (Pg. 1, Para. 1-2, “Customers from startups to enterprises observe increased revenue when personalizing customer interactions . . . This blog post demonstrates an approach for product recommendations . . . using historical datasets [and] . . . an ML-based recommendation engine”, where “personalizing customer interactions” for “product recommendations” requires that an item be suggested to a user; see also Pg. 2, Fig., where the “Recommendation API” for user suggestions is shown).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the prediction model used in a suggestion system of Wouhaybi with the machine learning system that suggests an item to a user of Spivak in order to use the machine learning models to generate predictions that will increase revenue at facilities (compare Wouhaybi, Para. [0034], “one(s) of the environments 124, 126 can be a commercial environment such as a bar and/or nightclub, a hospital, a movie theatre, a restaurant, a retail store, etc., and/or any combination(s) thereof” with Spivak, Pg. 1, Para. 1-2, “Customers from startups to enterprises observe increased revenue when personalizing customer interactions . . . This blog post demonstrates an approach for product recommendations . . . using historical datasets”).
Regarding Claim 11, Wouhaybi teaches the information processing method according to claim 1, wherein the dataset includes . . . [data] in the first facility (Para. [0028], “Examples disclosed herein include clustered federated learning using context data . . . include[ing] . . . enabling multiple nodes to train a deep learning network based on data (e.g., measured data, observed data, live data, sensor data, etc.) observed by the nodes . . . and/or updated deep learning network weights to a central location” and Para. [0032], “context data can include . . . a physical location of the node . . . environmental data associated with the node”).
Wouhaybi does not explicitly disclose . . . a behavior history of a plurality of users on a plurality of items . . . (where the contents of the dataset are not specifically described as containing behavior histories between users and items).
However, Spivak teaches . . . [using datasets, which include] a behavior history of a plurality of users on a plurality of items [for machine learning models] . . . (Pg. 1, Para. 2, “This blog post demonstrates an approach for product recommendations . . . using historical datasets” and Pg. 2, Para. 2, “Most companies already have existing historical datasets . . . In the case of retail companies, the product order history is a good fit for interactions . . . The product and media catalogs map to the items dataset and the customer profiles to the user dataset”, where the behavioral history, “historical datasets” of “product order history”, of users on items, “product order interactions”, is used for “product recommendations”; see also Pg. 3, Para. 4, “USER_ID (string) and one additional metadata field for the users dataset ITEM_ID (string) and one additional metadata field for the items dataset USER_ID (string), ITEM_ID (string), TIMESTAMP (long; as Epoch time) for the interactions dataset”, where a plurality of “users”, “items”, and “interactions” between the two entities are included in the dataset).
Before the effective filing date of the invention, it would have been obvious to one of ordinary skill in the art to combine the use of datasets including data from a first facility, as taught by Wouhaybi, with the use of datasets including behavioral histories of a plurality of users on a plurality of items for machine learning models, as taught by Spivak, in order to use the machine learning models to generate predictions that will increase revenue at the first facility (compare Wouhaybi, Para. [0034], “one(s) of the environments 124, 126 can be a commercial environment such as a bar and/or nightclub, a hospital, a movie theatre, a restaurant, a retail store, etc., and/or any combination(s) thereof” with Spivak, Pg. 1, Para. 1-2, “Customers from startups to enterprises observe increased revenue when personalizing customer interactions . . . This blog post demonstrates an approach for product recommendations . . . using historical datasets”).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MATTHEW BRYCE GOLAN whose telephone number is (571)272-5159. The examiner can normally be reached Monday through Friday, 8:00 AM to 5:00 PM ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Alexey Shmatov can be reached at (571) 270-3428. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MATTHEW BRYCE GOLAN/Examiner, Art Unit 2123
/ALEXEY SHMATOV/Supervisory Patent Examiner, Art Unit 2123