DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
This action is in response to the arguments filed on 08/08/2025. Claims 1-5, 8, 10-17, and 20-22 are pending in the application and have been considered below.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-5, 8, 10-17, and 20-22 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Regarding Claim 1:
For Step 1, the claim is directed to an apparatus, so it recites a statutory category of invention.
For Step 2A, Prong 1:
The claim recites the limitation of “generate, using a [trained content generation reinforcement learning model] and based on a set of features, a content data object customized for a target client.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “generate, based at least in part on the feedback
user experience content dataset, a plurality of exploratory feature sets each comprising one or more target client characteristics of the user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different [feature generation model] applied to the feedback user experience content dataset.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determine a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set score.” The determine limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determine step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set.” The determine limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determine step from practically being performed in the human mind. This limitation is a mental process.
The claim recites the limitation of “generating, [using the retrained content generation reinforcement learning model] and based on the updated set of features, a unique content data object customized for the new target client.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
For Step 2A, Prong 2, the claim recites additional elements: processors; storage devices; user devices; “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object;” “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients;” and “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model.”
The processor is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. (MPEP 2106.05(f)).
The additional elements of “storage devices” and “user devices” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” (i.e., data gathering) step is a form of insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim. See MPEP 2106.05(g).
The additional element of “content generation learning model is trained based at least in part on the dynamic framework feature set” is a generic training recitation that amounts to mere instructions to apply the abstract idea using a generic computer component. See MPEP 2106.05(f).
The additional element of “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” is a generic training recitation that amounts to mere instructions to apply the abstract idea using a generic computer component. See MPEP 2106.05(f).
The “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients” (i.e., data outputting) step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Step 2B
The additional elements of processors, storage devices, user devices, “content generation learning model is trained based at least in part on the dynamic framework feature set,” and “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This activity appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
Here, the “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This activity appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of processors, storage devices, user devices, “content generation learning model is trained based at least in part on the dynamic framework feature set,” and “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 2:
Claim 2, which incorporates the rejection of claim 1, recites further limitations such as “generate, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets; and generate, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 3:
Claim 3, which incorporates the rejection of claim 2, recites additional elements:
a genetic feature selection algorithm or a chi-square feature selection algorithm.
The additional element of “genetic feature selection algorithm or chi-square feature selection algorithm” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element of “genetic feature selection algorithm or chi-square feature selection algorithm” does not amount to significantly more for the reasons set forth in step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “genetic feature selection algorithm or chi-square feature selection algorithm” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 4:
Claim 4, which incorporates the rejection of claim 1, recites further limitations such as “generate a plurality of synthetic target features based at least in part on the target client characteristics” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 5:
Claim 5, which incorporates the rejection of claim 1, recites further limitations such as “an exploratory feature set associated with a highest normalized exploratory feature set score” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 8:
Claim 8, which incorporates the rejection of claim 1, recites additional elements: “receive a feedback user experience content dataset comprising interaction data from a target client based at least in part on the visual representation of the plurality of content data objects presented to the target client” and user devices.
The “receive a feedback user experience content dataset comprising interaction data from a target client based at least in part on the visual representation of the plurality of content data objects presented to the target client” (i.e., data gathering) step is a form of insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim. See MPEP 2106.05(g).
The additional element of “user devices” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “receive a feedback user experience content dataset comprising interaction data from a target client based at least in part on the visual representation of the plurality of content data objects presented to the target client” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This activity appears to be well-understood, routine, and conventional as evidenced by Parker v. Flook, 437 U.S. 584, 588-89, 198 USPQ 193, 196 (1978), and MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “user devices” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 10:
Claim 10, which incorporates the rejection of claim 1, recites further limitations such as “generate one or more screened exploratory feature sets by selecting a subset of exploratory feature sets of the plurality of exploratory feature sets based at least in part on the normalized exploratory feature set score; and determine the plurality of feature labels of the dynamic framework feature set by selecting one or more screened set features from the one or more screened exploratory feature sets” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 11:
Claim 11, which incorporates the rejection of claim 1, recites further limitations such as “select one or more exploratory set features of the plurality of exploratory feature sets based on a correlation of exploratory set features between a subset of the plurality of exploratory feature sets” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 12:
Claim 12, which incorporates the rejection of claim 1, recites further limitations such as “determine, using the feature selection machine learning model, one or more feature labels of the dynamic framework feature set from the plurality of exploratory feature sets” that are part of the abstract idea.
The claim recites an additional element: “train a feature selection machine learning model based on the feedback user experience content dataset and the one or more content generation objectives.”
The additional element of “train a feature selection machine learning model based on the feedback user experience content dataset and the one or more content generation objectives” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element of “train a feature selection machine learning model based on the feedback user experience content dataset and the one or more content generation objectives” does not amount to significantly more for the reasons set forth in step 2A above.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional element of “train a feature selection machine learning model based on the feedback user experience content dataset and the one or more content generation objectives” to perform the claim steps amounts to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 13:
Claim 13, which incorporates the rejection of claim 1, recites further limitations such as “generate a plurality of candidate dynamic framework feature sets, each candidate dynamic framework feature set comprising at least one selected feature from the plurality of exploratory feature sets; generate a candidate dynamic framework feature set score for each candidate dynamic framework feature set in the plurality of candidate dynamic framework feature sets, wherein the candidate dynamic framework feature set score indicates a relative priority of each candidate dynamic framework feature set relative to the plurality of exploratory feature sets based at least in part on the one or more content generation objectives; and assign a candidate dynamic framework feature set from the plurality of candidate dynamic framework feature sets as the dynamic framework feature set based at least in part on the candidate dynamic framework feature set score” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 14:
Claim 14, which incorporates the rejection of claim 1, recites further limitations such as “generate a first normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the first normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on the first content generation objective; generate a second normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the second normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on the second content generation objective; generate a first dynamic framework feature set comprising a plurality of selected features of the feedback user experience content dataset by selecting one or more exploratory set features of the plurality of exploratory feature sets based at least in part on the first normalized exploratory feature set scores; and generate a second dynamic framework feature set comprising a plurality of selected features of the feedback user experience content dataset by selecting one or more exploratory set features of the plurality of exploratory feature sets based at least in part on the second normalized exploratory feature set scores” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 15:
For Step 1, the claim is a method, so it does recite a statutory category of invention.
For Step 2A, Prong 1:
The claim recites the limitation of “generating, using a [trained content generation reinforcement learning model] and based on a set of features, a content data object customized for a target client.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “generating, based at least in part on the feedback
user experience content dataset, a plurality of exploratory feature sets each comprising one or more target client characteristics of the user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different [feature generation model] applied to the feedback user experience content dataset.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “generating a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “determining a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set score.” The determining limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determining step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determining an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set.” The determining limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determining step from practically being performed in the human mind. This limitation is a mental process.
The claim recites the limitation of “generating, [using the retrained content generation reinforcement learning model] and based on the updated set of features, a unique content data object customized for the new target client.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
For Step 2A, Prong 2, the claim recites additional elements: processors; storage devices; user devices; “receiving a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object;” “transmitting a visual representation of the unique content data objects to one or more user devices associated with the new target clients;” and “retraining the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model.”
The processor is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. (MPEP 2106.05(f)).
The additional elements of “storage devices” and “user devices” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “receiving a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” (i.e., data gathering) step is a form of insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim. See MPEP 2106.05(g).
The additional element of “content generation learning model is trained based at least in part on the dynamic framework feature set” is a generic training recitation that amounts to mere instructions to apply the abstract idea using a generic computer component. See MPEP 2106.05(f).
The additional element of “retraining the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” is a generic training recitation that amounts to mere instructions to apply the abstract idea using a generic computer component. See MPEP 2106.05(f).
The “transmitting a visual representation of the unique content data objects to one or more user devices associated with the new target clients” (i.e., data outputting) step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Step 2B
The additional elements of processors, storage devices, user devices, “content generation learning model is trained based at least in part on the dynamic framework feature set,” and “retraining the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “receiving a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This activity appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
Here, the “transmitting a visual representation of the unique content data objects to one or more user devices associated with the new target clients” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This activity appears to be well-understood, routine, and conventional as evidenced by MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of processors, storage devices, and “content generation learning model is trained based at least in part on the dynamic framework feature set” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 16:
Claim 16, which incorporates the rejection of claim 15, recites further limitations such as “generating, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets; and generating, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory
feature sets is associated with a rank-based feature score” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 17:
Claim 17, which incorporates the rejection of claim 15, recites further limitations such as “generating a plurality of synthetic target features based at least in part on the historical client characteristics” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Regarding Claim 20:
Claim 20, which incorporates the rejection of claim 15, recites “determining an updated dynamic framework feature set based at least in part on the feedback user experience content dataset” that is part of the abstract idea.
The claim recites additional elements:
“receiving an updated feedback user experience content dataset comprising interaction data from the new target client based at least in part on the visual representation of the plurality of content data object presented to the new target client” and user devices.
The “receiving an updated feedback user experience content dataset comprising interaction data from a target client based at least in part on the visual representation of the plurality of content data objects presented to the target client” (i.e., data gathering) step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
The additional element of “user devices” is a generic computer component that amounts to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The additional element of “retraining the content generation learning model based at least in part on the updated dynamic framework feature set” is a generic training recitation that may amount to a generic computer component to apply an abstract idea under MPEP 2106.05(f).
Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here the “receiving an updated feedback user experience content dataset comprising interaction data from the new target client based at least in part on the visual representation of the plurality of content data object presented to the new target client” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by (MPEP 2106.05(d)(II)(i)).
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “user devices” and “retraining the content generation learning model based at least in part on the updated dynamic framework feature set” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 21:
For Step 1, the claim is a computer program product, so it does recite a statutory category of invention.
For Step 2A, Prong 1:
The claim recites the limitation of “generate, using a [trained content generation reinforcement learning model] and based on a set of features, a content data object customized for a target client.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “generate, based at least in part on the feedback
user experience content dataset, a plurality of exploratory feature sets each comprising one or more target client characteristics of the user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different [feature generation model] applied to the feedback user experience content dataset.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “determine a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set score.” The determine limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determine step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set.” The determine limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determine step from practically being performed in the human mind. This limitation is a mental process.
The claim recites the limitation of “generating, [using the retrained content generation reinforcement learning model] and based on the updated set of features, a unique content data object customized for the new target client.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
For Step 2A, Prong 2, the claim recites additional elements: “processors,” “storage devices,” “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object,” “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients,” “user devices,” and “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model.”
The processor is recited at a high level of generality, i.e., as a generic processor performing a generic computer function of processing data. This generic processor limitation is no more than mere instructions to apply the exception using a generic computer component. (MPEP 2106.05(f)).
The additional elements of “storage devices” and “user devices” are generic computer components that amount to mere instructions to apply the abstract idea. See MPEP 2106.05(f).
The “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” (i.e., data gathering) step is a form of insignificant extra-solution activity because it is a mere nominal or tangential addition to the claim. See MPEP 2106.05(g).
The additional element of “content generation learning model is trained based at least in part on the dynamic framework feature set” is a generic training recitation that may amount to a generic computer component to apply an abstract idea under MPEP 2106.05(f).
The additional element of “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” is a generic training recitation that may amount to a generic computer component to apply an abstract idea under MPEP 2106.05(f).
The “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients” (i.e., data outputting) step is a form of insignificant extra-solution activity. See MPEP 2106.05(g).
Step 2B
The additional elements of “processors,” “storage devices,” “user devices,” “content generation learning model is trained based at least in part on the dynamic framework feature set,” and “retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here the “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by (MPEP 2106.05(d)(II)(i)).
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
Here the “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional as evidenced by (MPEP 2106.05(d)(II)(i)).
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of “processors, storage devices” and “content generation learning model is trained based at least in part on the dynamic framework feature set” to perform the claim steps amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Regarding Claim 22:
Claim 22, which incorporates the rejection of claim 21, recites further limitations such as “generating, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets; and generating, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory
feature sets is associated with a rank-based feature score” that are part of the abstract idea.
There are no additional elements recited in this claim that amount to an integration of the judicial exception into a practical application or significantly more than the judicial exception. Therefore, the claim is not eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 5, 15 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), and further in view of Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich).
As to claim 1, Lohiya teaches an apparatus comprising one or more processors and one or more storage devices storing instructions that are operable, when executed by the one or more processors, to cause the one or more processors to:
generate, using a trained content generation reinforcement learning model and based on a set of features, a content data object customized for a target client (paragraphs [0032], the content generation model is trained and/or tuned using reinforcement learning (RL) techniques; [0042], generate content for a targeted audience configured to elicit a particular recipient response. The content generation system uses AI/ML models trained by learning both consumer and content embeddings from historical interaction data across different modalities (e.g., numeric data, such as quantifiable content interactions (clicks, reads, interaction duration, etc.), product interactions, product downloads; text data, such as segment labels, surveys and survey interactions; categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like) …; using reinforcement learning to generate content; [0046], "reinforcement learning" is an AI/ML model training technique that uses a reward (or trial-and-error) feedback system to improve the output of the AI/ML model; [0057], the model training module 122 includes a content generation training module 132 configured to train a language model to generate content based on reinforcement learning using the performance prediction model trained via the performance training module (see, for example, FIGS. 4 and 5); [0097], As shown in FIG. 7, the content generation module 124 uses a trained content generation model 140);
receive a feedback user experience content dataset comprising client characteristics related to a plurality of clients, and based on interaction data from the target client relative to the content data object (paragraphs [0042]-[0043], The content generation system uses AI/ML models trained by learning both consumer and content embeddings from historical interaction data across different modalities (e.g., numeric data, such as quantifiable content interactions (clicks, reads, interaction duration, etc.), product interactions, product downloads; text data, such as segment labels, surveys and survey interactions; categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like). The trained AI/ML models operate to score the pairing of content and audience segment based on one or more specific performance objectives (e.g., KPIs) and, in some embodiments, refines a language model (e.g., LLM) using reinforcement learning to generate content; wherein Examiner interprets the generated content to include the content data object …; [0130], “receives as input the collected data”; [0142], “customer feedback”; [0147], “feedback 1618”; [0148], “feedback 1620”. Based on paragraphs [0065]-[0066] of the original disclosure (specification), Examiner interprets “categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like” as client characteristics related to clients. Examiner further interprets the quantifiable content interactions, product interactions, product downloads, text data (such as segment labels, surveys, survey interactions) and “customer feedback” to describe how users perceive and interact with the content data object).
Lohiya teaches feedback ([0148] and Fig. 16 element 1620). However, Lohiya fails to explicitly teach:
generate, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more of the client characteristics of the feedback user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different feature generation model applied to the feedback user experience content dataset; and
generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.
Schmidt, in combination with Lohiya, teaches:
generate, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more of the client characteristics of the feedback user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different feature generation model applied to the feedback user experience content dataset (paragraphs [0092], exploratory data analysis (EDA) data for each of the selected features; [0175] the feature importance module 1941 may determine univariate feature importance scores for one or more ( e.g., all) the features of a dataset
during the exploratory data analysis phase of the model development process. In some embodiments, permutation importance techniques are generally used to determine the
importance of tabular features);
generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives (paragraphs [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%..., the N greatest normalized feature impact scores may be retained; [0186] With respect to image data, exploratory data analysis operations may include, without limitation, automated assessment of image data quality ( e.g., determining the feature importance of the candidate image features, detecting
duplicates in the image data using image similarity techniques, detecting missing images, detecting broken image links, detecting unreadable images, etc.), and target aware previewing of image data (e.g., displaying examples of images per class for classification problems, automated drilldown into images associated with different target subranges for regression problems, etc.). The feature importance of a candidate image feature may be, for example, the feature's univariate feature importance as discussed above.); and
determine a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set scores (paragraph [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%..., the N greatest normalized feature impact scores (interpreted by Examiner as normalized exploratory feature set scores) may be retained”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lohiya to add “content generation objectives,” as taught by Schmidt above. The modification would have been obvious because one of ordinary skill would be motivated to classify features based on their importance scores, as suggested by Schmidt ([0189]).
Lohiya teaches a trained content generation model. However, Lohiya and Schmidt fail to explicitly teach:
retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model;
determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set;
generate, using the retrained content generation reinforcement learning model and based on the updated set of features, a unique content data object customized for the new target client; and
transmit a visual representation of the unique content data object to one or more user devices associated with the new target client.
KLAFTER, in combination with Lohiya and Schmidt, teaches:
retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model (paragraphs [0006]-[0008], a model (interpreted as a trained content generation model by Examiner) or LLM may be updated or retrained using a reinforcement learning approach and based on output items or refined responses generated by that model; a model or LLM may be trained, retrained or refined using such specific datasets (Examiner interprets “specific datasets” to include feature labels of the dynamic framework feature set) to provide optimized outputs and/or refined responses for that given topic or domain (as represented, e.g., by the parameters with which the relevant dataset may be associated));
determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set (paragraph [0007], reward or cost functions to update model parameters. Updating or retraining a model or LLM according to some embodiments may be performed automatically…; wherein Examiner interprets “update model parameters” to include an updated set of features);
generate, using the retrained content generation reinforcement learning model and based on the updated set of features, a unique content data object customized for the new target client (paragraph [0060], revise or update training datasets…. The more data generated and used for training or retraining the model; [0069]-[0073], wherein Examiner interprets a reinforcement learning training, tuning, or retraining process, and/or training or tuning a machine learning model (interpreted as a retrained content generation reinforcement learning model by Examiner) to teach the limitation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya and Schmidt to add a trained content generation reinforcement learning model, as taught by KLAFTER above. The modification would have been obvious because one of ordinary skill would be motivated to enhance the overall quality and/or reliability of AI-generated content, as suggested by KLAFTER ([0004]).
However, Lohiya, Schmidt and KLAFTER fail to explicitly teach:
transmit a visual representation of the unique content data object to one or more user
devices associated with the new target client
Wunderlich, in combination with Lohiya, Schmidt and KLAFTER, teaches:
transmit a visual representation of the unique content data object to one or more user devices associated with the new target client (col. 20, lines 4-19, “transmit to administrator 302 a visual representation of receive responses such as a pie chart, bar chart, or heat map” and “computing device 114 may be configured to transmit messages back and forth between administrator 302 and recipient 402 via EC computing device 102 and recipient computing device 104”; col. 30, lines 33-40, “a reinforced or reinforcement learning module or program”).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt and KLAFTER to add a visual representation transmission, as taught by Wunderlich above. The modification would have been obvious because one of ordinary skill would be motivated to enable bilateral communication, in real time, with mass groups of individuals without compromising the security of each individual's contact information, as suggested by Wunderlich (col. 1, lines 45-47).
As to claim 5, which incorporates the rejection of claim 1, Schmidt teaches wherein the dynamic framework feature set comprises an exploratory feature set associated with a highest normalized exploratory feature set score (paragraph [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%…, the N greatest normalized feature impact scores (interpreted by Examiner as normalized exploratory feature set scores) may be retained).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lohiya to add “content generation objectives,” as taught by Schmidt above. The modification would have been obvious because one of ordinary skill would be motivated to classify features based on their importance scores, as suggested by Schmidt ([0189]).
As to claim 15, Lohiya teaches a computer-implemented method, comprising:
generating, using a trained content generation reinforcement learning model and based on a set of features, a content data object customized for a target client (paragraphs [0032], the content generation model is trained and/or tuned using reinforcement learning (RL) techniques; [0042], generate content for a targeted audience configured to elicit a particular recipient response. The content generation system uses AI/ML models trained by learning both consumer and content embeddings from historical interaction data across different modalities (e.g., numeric data, such as quantifiable content interactions (clicks, reads, interaction duration, etc.), product interactions, product downloads; text data, such as segment labels, surveys and survey interactions; categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like) …; using reinforcement learning to generate content; [0046], "reinforcement learning" is an AI/ML model training technique that uses a reward (or trial-and-error) feedback system to improve the output of the AI/ML model; [0097], As shown in FIG. 7, the content generation module 124 uses a trained content generation model 140).
receiving a feedback user experience content dataset comprising client characteristics related to a plurality of clients, and based on interaction data from the target client relative to the content data object (paragraphs [0042], The content generation system uses AI/ML models trained by learning both consumer and content embeddings from historical interaction data across different modalities (e.g., numeric data, such as quantifiable content interactions (clicks, reads, interaction duration, etc.), product interactions, product downloads; text data, such as segment labels, surveys and survey interactions; categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like). The trained AI/ML models operate to score the pairing of content and audience segment based on one or more specific performance objectives (e.g., KPIs) and, in some embodiments, refines a language model (e.g., LLM) using reinforcement learning to generate content; wherein Examiner interprets the generated content to include the content data object … [0130], "receives as input the collected data"; [0142], "customer feedback"; [0147], "feedback 1618"; [0148], "feedback 1620"; wherein Examiner interprets "the collected data," "customer feedback," and "feedback 1618 and 1620" as the feedback user experience content dataset. Based on paragraphs [0065]-[0066] of the original disclosure (specification), Examiner interprets "categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like" as client characteristics related to clients. Examiner further interprets the quantifiable content interactions, product interactions, product downloads, text data (such as segment labels, surveys, and survey interactions), and "customer feedback" to describe how users perceive and interact with the content data object).
Lohiya teaches feedback ([0148] and Fig. 16 element 1620). However, Lohiya fails to explicitly teach:
generating, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more of the client characteristics of the feedback user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different feature generation model applied to the feedback user experience content dataset; and
generating a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.
Schmidt, in combination with Lohiya, teaches:
generating, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more of the client characteristics of the feedback user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different feature generation model applied to the feedback user experience content dataset (paragraphs [0092], exploratory data analysis (EDA) data for each of the selected features; [0175], the feature importance module 1941 may determine univariate feature importance scores for one or more (e.g., all) the features of a dataset during the exploratory data analysis phase of the model development process. In some embodiments, permutation importance techniques are generally used to determine the importance of tabular features);
generating a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives (paragraphs [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%..., the N greatest normalized feature impact scores may be retained; [0186], With respect to image data, exploratory data analysis operations may include, without limitation, automated assessment of image data quality (e.g., determining the feature importance of the candidate image features, detecting duplicates in the image data using image similarity techniques, detecting missing images, detecting broken image links, detecting unreadable images, etc.), and target aware previewing of image data (e.g., displaying examples of images per class for classification problems, automated drilldown into images associated with different target subranges for regression problems, etc.). The feature importance of a candidate image feature may be, for example, the feature's univariate feature importance as discussed above.); and
determining a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set scores (paragraph [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%..., the N greatest normalized feature impact scores (interpreted by Examiner as normalized exploratory feature set scores) may be retained).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lohiya to add "content generation objectives," as taught by Schmidt, above. The modification would have been obvious because one of ordinary skill would be motivated to classify features based on each feature's importance score, as suggested by Schmidt, ([0189]).
Lohiya teaches a trained content generation model. However, Lohiya and Schmidt fail to explicitly teach:
retraining the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model;
determining an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set;
generating, using the retrained content generation reinforcement learning model and based on the updated set of features, a unique content data object customized for the new target client; and
transmitting a visual representation of the unique content data object to one or more user devices associated with the new target client.
KLAFTER, in combination with Lohiya and Schmidt, teaches:
retraining the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model (paragraphs [0006]-[0008], a model (interpreted as a trained content generation model by Examiner) or LLM may be updated or retrained using a reinforcement learning approach and based on output items or refined responses generated by that model; a model (interpreted as a trained content generation model by Examiner) or LLM may be trained, retrained or refined using such specific datasets to provide optimized outputs and/or refined responses for that given topic or domain (as represented, e.g., by the parameters with which the relevant dataset may be associated));
determining an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set (paragraph [0007], reward or cost functions to update model parameters. Updating or retraining a model or LLM according to some embodiments may be performed automatically…; wherein Examiner interprets "update model parameters" to teach the limitation);
generating, using the retrained content generation reinforcement learning model and based on the updated set of features, a unique content data object customized for the new target client (paragraph [0060], revise or update training datasets…. The more data generated and used for training or retraining the model; [0069]-[0073], wherein Examiner interprets a reinforcement learning training, tuning, or retraining process, and/or training or tuning a machine learning model (interpreted as a retrained content generation reinforcement learning model by Examiner) to teach the limitation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya and Schmidt to add a trained content generation reinforcement learning model, as taught by KLAFTER, above. The modification would have been obvious because one of ordinary skill would be motivated to enhance the overall quality and/or reliability of AI-generated content, as suggested by KLAFTER, ([0004]).
However, Lohiya, Schmidt and KLAFTER fail to explicitly teach:
transmitting a visual representation of the unique content data object to one or more user devices associated with the new target client.
Wunderlich, in combination with Lohiya, Schmidt and KLAFTER, teaches:
transmitting a visual representation of the unique content data object to one or more user devices associated with the new target client (paragraph [0047], Reinforcement Learning; col. 20, lines 4-19, "transmit to administrator 302 a visual representation of received responses such as a pie chart, bar chart, or heat map" and "computing device 114 may be configured to transmit messages back and forth between administrator 302 and recipient 402 via EC computing device 102 and recipient computing device 104"; col. 30, lines 33-40, "a reinforced or reinforcement learning module or program").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt and KLAFTER to add a visual representation transmission, as taught by Wunderlich, above. The modification would have been obvious because one of ordinary skill would be motivated to enable bilateral communication, in real time, with mass groups of individuals without compromising the security of each individual's contact information, as suggested by Wunderlich, (col. 1, lines 45-47).
As to claim 21, Lohiya teaches a computer program product for determining a dynamic framework feature set for a learning framework, the computer program product comprising at least one non-transitory computer-readable storage medium having computer-readable program code portions stored therein, the computer-readable program code portions comprising an executable portion configured to:
generate, using a trained content generation reinforcement learning model and based on a set of features, a content data object customized for a target client (paragraphs [0032], the content generation model is trained and/or tuned using reinforcement learning (RL) techniques; [0042], generate content for a targeted audience configured to elicit a particular recipient response. The content generation system uses AI/ML models trained by learning both consumer and content embeddings from historical interaction data across different modalities (e.g., numeric data, such as quantifiable content interactions (clicks, reads, interaction duration, etc.), product interactions, product downloads; text data, such as segment labels, surveys and survey interactions; categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like) …; using reinforcement learning to generate content; [0046], "reinforcement learning" is an AI/ML model training technique that uses a reward (or trial-and-error) feedback system to improve the output of the AI/ML model; [0057], the model training module 122 includes a content generation training module 132 configured to train a language model to generate content based on reinforcement learning using the performance prediction model trained via the performance training module (see, for example, FIGS. 4 and 5); [0097], As shown in FIG. 7, the content generation module 124 uses a trained content generation model 140);
receive a feedback user experience content dataset comprising client characteristics related to a plurality of clients, and based on interaction data from the target client relative to the content data object (paragraphs [0042], The content generation system uses AI/ML models trained by learning both consumer and content embeddings from historical interaction data across different modalities (e.g., numeric data, such as quantifiable content interactions (clicks, reads, interaction duration, etc.), product interactions, product downloads; text data, such as segment labels, surveys and survey interactions; categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like). The trained AI/ML models operate to score the pairing of content and audience segment based on one or more specific performance objectives (e.g., KPIs) and, in some embodiments, refines a language model (e.g., LLM) using reinforcement learning to generate content; wherein Examiner interprets the generated content to include the content data object … [0130], "receives as input the collected data"; [0142], "customer feedback"; [0147], "feedback 1618"; [0148], "feedback 1620"; wherein Examiner interprets "the collected data," "customer feedback," and "feedback 1618 and 1620" as the feedback user experience content dataset. Based on paragraphs [0065]-[0066] of the original disclosure (specification), Examiner interprets "categorical or demographic data, such as geography, funnel stage, age, experience, education, income level, gender, and/or the like" as client characteristics related to clients. Examiner further interprets the quantifiable content interactions, product interactions, product downloads, text data (such as segment labels, surveys, and survey interactions), and "customer feedback" to describe how users perceive and interact with the content data object).
Lohiya teaches feedback ([0148] and Fig. 16 element 1620). However, Lohiya fails to explicitly teach:
generate, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more of the client characteristics of the feedback user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different feature generation model applied to the feedback user experience content dataset; and
generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.
Schmidt, in combination with Lohiya, teaches:
generate, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more of the client characteristics of the feedback user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different feature generation model applied to the feedback user experience content dataset (paragraphs [0092], exploratory data analysis (EDA) data for each of the selected features; [0175], the feature importance module 1941 may determine univariate feature importance scores for one or more (e.g., all) the features of a dataset during the exploratory data analysis phase of the model development process. In some embodiments, permutation importance techniques are generally used to determine the importance of tabular features);
generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives (paragraphs [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%..., the N greatest normalized feature impact scores may be retained; [0186], With respect to image data, exploratory data analysis operations may include, without limitation, automated assessment of image data quality (e.g., determining the feature importance of the candidate image features, detecting duplicates in the image data using image similarity techniques, detecting missing images, detecting broken image links, detecting unreadable images, etc.), and target aware previewing of image data (e.g., displaying examples of images per class for classification problems, automated drilldown into images associated with different target subranges for regression problems, etc.). The feature importance of a candidate image feature may be, for example, the feature's univariate feature importance as discussed above.); and
determine a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set scores (paragraph [0180], the feature impacts may be normalized. For example, the feature impacts may be normalized so that the highest feature impact is 100%..., the N greatest normalized feature impact scores (interpreted by Examiner as normalized exploratory feature set scores) may be retained).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the system of Lohiya to add "content generation objectives," as taught by Schmidt, above. The modification would have been obvious because one of ordinary skill would be motivated to leverage generative AI for content generation-related tasks, as suggested by Schmidt, ([0003]).
Lohiya teaches a trained content generation model. However, Lohiya and Schmidt fail to explicitly teach:
retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model;
determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set;
generate, using the retrained content generation reinforcement learning model and based on the updated set of features, a unique content data object customized for the new target client; and
transmit a visual representation of the unique content data object to one or more user devices associated with the new target client.
KLAFTER, in combination with Lohiya and Schmidt, teaches:
retrain the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model (paragraphs [0006]-[0008], a model (interpreted as a trained content generation model by Examiner) or LLM may be updated or retrained using a reinforcement learning approach and based on output items or refined responses generated by that model; a model (interpreted as a trained content generation model by Examiner) or LLM may be trained, retrained or refined using such specific datasets to provide optimized outputs and/or refined responses for that given topic or domain (as represented, e.g., by the parameters with which the relevant dataset may be associated));
determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set (paragraph [0007], reward or cost functions to update model parameters. Updating or retraining a model or LLM according to some embodiments may be performed automatically…; wherein Examiner interprets "update model parameters" to teach the limitation);
generate, using the retrained content generation reinforcement learning model and based on the updated set of features, a unique content data object customized for the new target client (paragraph [0060], revise or update training datasets…. The more data generated and used for training or retraining the model; [0069]-[0073], wherein Examiner interprets a reinforcement learning training, tuning, or retraining process, and/or training or tuning a machine learning model (interpreted as a retrained content generation reinforcement learning model by Examiner) to teach the limitation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya and Schmidt to add a trained content generation reinforcement learning model, as taught by KLAFTER, above. The modification would have been obvious because one of ordinary skill would be motivated to enhance the overall quality and/or reliability of AI-generated content, as suggested by KLAFTER, ([0004]).
However, Lohiya, Schmidt and KLAFTER fail to explicitly teach:
transmit a visual representation of the unique content data object to one or more user
devices associated with the new target client.
Wunderlich, in combination with Lohiya, Schmidt and KLAFTER, teaches:
transmit a visual representation of the unique content data object to one or more user
devices associated with the new target client (col. 20, lines 4-19, "transmit to administrator 302 a visual representation of received responses such as a pie chart, bar chart, or heat map" and "computing device 114 may be configured to transmit messages back and forth between administrator 302 and recipient 402 via EC computing device 102 and recipient computing device 104"; col. 30, lines 33-40, "a reinforced or reinforcement learning module or program").
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt and KLAFTER to add a visual representation transmission, as taught by Wunderlich, above. The modification would have been obvious because one of ordinary skill would be motivated to enable bilateral communication, in real time, with mass groups of individuals without compromising the security of each individual's contact information, as suggested by Wunderlich, (col. 1, lines 45-47).
Claims 2, 16, and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), and KARNAGEL et al. (US 2020/0327357 A1, hereinafter referred to as KARNAGEL).
As to claim 2, which incorporates the rejection of claim 1, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the plurality of exploratory feature sets comprises one or more list-based exploratory feature sets and one or more rank-based exploratory feature sets, and the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets; and
generate, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score.
KARNAGEL, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein the plurality of exploratory feature sets comprises one or more list-based exploratory feature sets (paragraph [0018] Feature-based ranking can be used to consider only subsets with more important features) and one or more rank-based exploratory feature sets (paragraphs [0020]-[0021] …A rank based on relevance scores of the features is calculated for each feature…), and the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets (paragraphs [0018]-[0020]…Feature-based ranking can be used to consider only subsets with more important features, which are those features that are well correlated to the inferences and accuracy of the ML model, ensemble ranking that combines multiple feature rankings into a new ranking. For example, an ensemble ranking can be based on an average of feature scores of all other rankings); and
generate, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score (paragraphs [0020]-[0021] for each feature of a training dataset, a relevance score based on: a relevance scoring function, and statistics of values, of the feature, that occur in the training dataset. A rank based on relevance scores of the features is calculated for each feature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add rank-based exploratory feature sets, as taught by KARNAGEL, above. The modification would have been obvious because one of ordinary skill would be motivated to use an ensemble ranking that combines multiple feature rankings into a new ranking, as suggested by KARNAGEL, ([0020]).
As to claim 16, which incorporates the rejection of claim 15, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the plurality of exploratory feature sets comprises one or more list-based exploratory feature sets and one or more rank-based exploratory feature sets, the computer-implemented method further comprising:
generating, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets; and
generating, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score.
KARNAGEL, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches:
wherein the plurality of exploratory feature sets comprises one or more list-based exploratory feature sets (paragraph [0018] Feature-based ranking can be used to consider only subsets with more important features) and one or more rank-based exploratory feature sets (paragraphs [0020]-[0021] A rank based on relevance scores of the features is calculated for each feature…), the computer-implemented method further comprising:
generating, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets (paragraphs [0018]-[0020]…Feature-based ranking can be used to consider only subsets with more important features, which are those features that are well correlated to the inferences and accuracy of the ML model…ensemble ranking that combines multiple feature rankings into a new ranking. For example, an ensemble ranking can be based on an average of feature scores of all other rankings); and
generating, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score (paragraphs [0020]-[0021], for each feature of a training dataset, a relevance score based on: a relevance scoring function, and statistics of values, of the feature, that occur in the training dataset. A rank based on relevance scores of the features is calculated for each feature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add rank-based exploratory feature sets, as taught by KARNAGEL, above. The modification would have been obvious because one of ordinary skill would be motivated to use an ensemble ranking that combines multiple feature rankings into a new ranking, as suggested by KARNAGEL, ([0020]).
As to claim 22, which incorporates the rejection of claim 21, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the plurality of exploratory feature sets comprises one or more list-based exploratory feature sets and one or more rank-based exploratory feature sets, the executable portion of the computer program product is further configured to:
generate, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets; and
generate, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score.
KARNAGEL, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein the plurality of exploratory feature sets comprises one or more list-based exploratory feature sets (paragraph [0018] Feature-based ranking can be used to consider only subsets with more important features) and one or more rank-based exploratory feature sets (paragraphs [0020]-[0021] A rank based on relevance scores of the features is calculated for each feature), and the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate, based at least in part on a list-based feature generation model, the one or more list-based exploratory feature sets (paragraphs [0018]-[0020]…Feature-based ranking can be used to consider only subsets with more important features, which are those features that are well correlated to the inferences and accuracy of the ML model…ensemble ranking that combines multiple feature rankings into a new ranking. For example, an ensemble ranking can be based on an average of feature scores of all other rankings); and
generate, based at least in part on a rank-based feature generation model, the one or more rank-based exploratory feature sets, wherein each exploratory set feature in the one or more rank-based exploratory feature sets is associated with a rank-based feature score (paragraphs [0020]-[0021], for each feature of a training dataset, a relevance score based on: a relevance scoring function, and statistics of values, of the feature, that occur in the training dataset. A rank based on relevance scores of the features is calculated for each feature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add rank-based exploratory feature sets, as taught by KARNAGEL above. The modification would have been obvious because one of ordinary skill would be motivated to use an ensemble ranking that combines multiple feature rankings into a new ranking, as suggested by KARNAGEL ([0020]).
Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), and further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), KARNAGEL et al. (US 2020/0327357 A1, hereinafter referred to as KARNAGEL), and Kang et al. (US 2022/0399081 A1, hereinafter referred to as Kang).
As to claim 3, which incorporates the rejection of claim 2, Lohiya, Schmidt, KLAFTER, Wunderlich and KARNAGEL fail to explicitly teach wherein the list-based feature generation model comprises a genetic feature selection algorithm or a chi-square feature selection algorithm.
However, Kang, in combination with Lohiya, Schmidt, KLAFTER, Wunderlich and KARNAGEL, teaches wherein the list-based feature generation model comprises a genetic feature selection algorithm or a chi-square feature selection algorithm (paragraph [0008] feature selection based on a genetic algorithm).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER, Wunderlich and KARNAGEL to add a genetic feature selection algorithm, as taught by Kang above. The modification would have been obvious because one of ordinary skill would be motivated to calculate a prediction accuracy for each of the feature models as a prediction result, as suggested by Kang ([0020]).
Claims 4 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), and further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), and GOODSITT et al. (US 2024/0193308 A1, hereinafter referred to as GOODSITT).
As to claim 4, which incorporates the rejection of claim 1, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate a plurality of synthetic target features based at least in part on historical client characteristics.
However, GOODSITT, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate a plurality of synthetic target features based at least in part on historical client characteristics (paragraphs [0091]-[0093], input a second feature set to a synthetic generation model that produces a third synthetic feature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add synthetic features, as taught by GOODSITT above. The modification would have been obvious because one of ordinary skill would be motivated to preserve the privacy of sensitive information stored on user devices, as suggested by GOODSITT ([0091]).
As to claim 17, which incorporates the rejection of claim 15, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach:
generate a plurality of synthetic target features based at least in part on historical client characteristics.
However, GOODSITT, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches:
generate a plurality of synthetic target features based at least in part on historical client characteristics (paragraphs [0091]-[0092], input a second feature set to a synthetic generation model that produces a third synthetic feature).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add synthetic features, as taught by GOODSITT above. The modification would have been obvious because one of ordinary skill would be motivated to preserve the privacy of sensitive information stored on user devices, as suggested by GOODSITT ([0091]).
Claims 8 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), and further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), Jiang et al. (US 2024/0221029 A1, hereinafter referred to as Jiang), and Kumar et al. (US 2017/0061286 A1, hereinafter referred to as Kumar).
As to claim 8, which incorporates the rejection of claim 1, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
receive an updated feedback user experience content dataset comprising interaction data from the new target client based at least in part on the visual representation of the unique content data objects presented to the new target client on the one or more user devices.
However, Kumar, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
receive an updated feedback user experience content dataset comprising interaction data from the new target client based at least in part on the visual representation of the unique content data objects presented to the new target client on the one or more user devices (paragraphs [0066], users interacting with an application or browser accessing the item server 108 on a client device 114, filling out surveys, publicly known information about the user, etc.… (4) user feedback, such as comments, shares, likes, dislikes, favorites, actions, etc., and so forth; [0116], presenting specially selected items to the user with the purpose of eliciting user feedback, whether negative (e.g., skipping, ignoring, rejecting, disapproving, etc.) or positive (e.g., liking, sharing, purchasing, viewing, viewing in the entirety, etc.), which would maximize the information gained by the recommendation unit 104 about the users' preferences with as little user interaction as possible).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add user feedback, as taught by Kumar above. The modification would have been obvious because one of ordinary skill would be motivated to maximize the information gained by the recommendation unit 104 about the users' preferences, as suggested by Kumar ([0116]).
As to claim 20, which incorporates the rejection of claim 15, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach
receiving an updated feedback user experience content dataset comprising interaction data from the new target client based at least in part on the visual representation of the unique content data objects presented to the new target client on the one or more user devices.
However, Kumar, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches
receiving an updated feedback user experience content dataset comprising interaction data from the new target client based at least in part on the visual representation of the unique content data objects presented to the new target client on the one or more user devices (paragraphs [0066], users interacting with an application or browser accessing the item server 108 on a client device 114, filling out surveys, publicly known information about the user, etc.… (4) user feedback, such as comments, shares, likes, dislikes, favorites, actions, etc., and so forth; [0116], presenting specially selected items to the user with the purpose of eliciting user feedback, whether negative (e.g., skipping, ignoring, rejecting, disapproving, etc.) or positive (e.g., liking, sharing, purchasing, viewing, viewing in the entirety, etc.), which would maximize the information gained by the recommendation unit 104 about the users' preferences with as little user interaction as possible).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add user feedback, as taught by Kumar above. The modification would have been obvious because one of ordinary skill would be motivated to maximize the information gained by the recommendation unit 104 about the users' preferences, as suggested by Kumar ([0116]).
Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), and further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), and Vadrevu et al. (US 2010/0293175 A1, hereinafter referred to as Vadrevu).
As to claim 10, which incorporates the rejection of claim 1, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate one or more screened exploratory feature sets by selecting a subset of exploratory feature sets of the plurality of exploratory feature sets based at least in part on the normalized exploratory feature set score; and
determine the plurality of feature labels of the dynamic framework feature set by selecting one or more screened set features from the one or more screened exploratory feature sets.
However, Vadrevu, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
generate one or more screened exploratory feature sets by selecting a subset of exploratory feature sets of the plurality of exploratory feature sets based at least in part on the normalized exploratory feature set score (paragraphs [0057]-[0059], wherein, using the broadest reasonable interpretation, Examiner interprets normalizing the feature scores of a particular set of training data by rank normalization, in which the feature scores of a particular feature are first sorted and the ranks of the feature scores are used to normalize the scores, to teach the limitation); and
determine the plurality of feature labels of the dynamic framework feature set by selecting one or more screened set features from the one or more screened exploratory feature sets (paragraphs [0053] and [0069]-[0071], wherein, using the broadest reasonable interpretation, Examiner interprets "the distributions of the feature scores to conform to a uniform or Gaussian distribution for each feature of the training data, respectively," and that the normalized feature scores 510 are then provided to universal ranking function 212 for ranking, such that the set of documents are ranked based on the documents' normalized feature scores, to teach the limitation).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add screened set feature selection, as taught by Vadrevu above. The modification would have been obvious because one of ordinary skill would be motivated to normalize the feature scores of a particular data set, as suggested by Vadrevu ([0057]).
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), and further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), and Li et al. ("Feature Selection: A Data Perspective," hereinafter referred to as Li).
As to claim 11, which incorporates the rejection of claim 1, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein to determine the plurality of feature labels of the dynamic framework feature set, the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
select one or more exploratory set features of the plurality of exploratory feature sets based on a correlation of exploratory set features between a subset of the plurality of exploratory feature sets.
However, Li, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein to determine the plurality of feature labels of the dynamic framework feature set, the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
select one or more exploratory set features of the plurality of exploratory feature sets based on a correlation of exploratory set features between a subset of the plurality of exploratory feature sets (page 2 of 45…For example, in Figure 1(a), feature f1 is a relevant feature that is able to discriminate two classes (clusters). However, given feature f1, feature f2 in Figure 1(b) is redundant as f2 is strongly correlated with f1. In Figure 1(c), feature f3 is an irrelevant feature, as it cannot separate two classes (clusters) at all...).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add correlated feature sets, as taught by Li above. The modification would have been obvious because one of ordinary skill would be motivated to identify strongly correlated feature sets in order to improve learning performance, increase computational efficiency, decrease memory storage, and build better generalization models, as suggested by Li (Introduction).
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Lohiya et al. (US 2025/0225375 A1, hereinafter referred to as Lohiya), in view of Schmidt et al. (US 2024/0338554 A1, hereinafter referred to as Schmidt), and further in view of KLAFTER et al. (US 2025/0156634 A1, hereinafter referred to as KLAFTER), Wunderlich et al. (US 11,431,664 B2, hereinafter referred to as Wunderlich), and Khmaissia et al. (US 2023/0132720 A1, hereinafter referred to as Khmaissia).
As to claim 12, which incorporates the rejection of claim 1, Lohiya, Schmidt, KLAFTER and Wunderlich fail to explicitly teach wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
train a feature selection machine learning model based on the feedback user experience content dataset and the one or more content generation objectives; and
determine, using the feature selection machine learning model, one or more feature labels of the dynamic framework feature set from the one or more exploratory set features.
However, Khmaissia, in combination with Lohiya, Schmidt, KLAFTER and Wunderlich, teaches wherein the one or more storage devices store instructions that are operable, when executed by the one or more processors, to further cause the one or more processors to:
train a feature selection machine learning model based on the feedback user experience content dataset and the one or more content generation objectives (paragraph [0058], determining which document images to add to the training document images, the oracle accuracy may be used to train the feature selection model); and
determine, using the feature selection machine learning model, one or more feature labels of the dynamic framework feature set from the one or more exploratory set features (paragraph [0058], the feature selection model tries to find features that are more relevant to the oracle accuracy).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the combination system of Lohiya, Schmidt, KLAFTER and Wunderlich to add feature selection machine learning model training, as taught by Khmaissia above. The modification would have been obvious because one of ordinary skill would be motivated to find the subset of features that can model the extraction accuracy, as suggested by Khmaissia ([0058]).
Examiner’s comments
For the record, a complete prior art search was conducted for dependent claims 13-14. No art rejection is made for these claims; they are rejected only under 35 U.S.C. 101.
Response to Applicant’s arguments
Applicant's arguments filed on 08/08/2025 with respect to the prior art rejection of claims 1-5, 8, 10-17, and 20-22 have been considered but are not persuasive with respect to the rejections under 35 U.S.C. 101.
I. Rejections under § 101
Applicant appears to assert that the present claims improve, and are directed to improving, machine learning and the underlying computing system itself, as evidenced by the novelty of the claims, the improvement of the machine learning process itself, and the lack of the Office's identification of any mental or abstract equivalent of the recited process; thus, Applicant asserts, the claims are eligible under the Alice/Mayo test at Step 2A. In addition, Applicant submits that the claim elements are not properly considered as a whole and that the recited training process amounts to significantly more than any abstract idea.
Accordingly, at least for the foregoing reasons, Applicant respectfully submits that claims 1-5, 8, 10-17, and 20-22 are not directed to an abstract idea. Therefore, Applicant respectfully requests withdrawal of the § 101 rejection and allowance of the pending claims.
Examiner's response:
Examiner respectfully disagrees. Applicant appears to assert that the present claims improve, and are directed to improving, machine learning and the underlying computing system itself, as evidenced by the novelty of the claims.
MPEP 2106.04(a): “Mental processes – concepts performed in the human mind (including an observation, evaluation, judgment, opinion).” See additionally MPEP 2106.04(a)(2).
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
Examiner is interpreting the limitations as abstract ideas implemented on a generic computer. (Step 2A Prong 1).
The claim recites the limitation of “generate, using a [trained content generation reinforcement learning model] and based on a set of features, a content data object customized for a target client.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process.”
The claim recites the limitation of “generate, based at least in part on the feedback user experience content dataset, a plurality of exploratory feature sets each comprising one or more target client characteristics of the user experience content dataset, wherein each of the plurality of exploratory feature sets is generated based on at least one different [feature generation model] applied to the feedback user experience content dataset.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “generate a normalized exploratory feature set score for each exploratory feature set of the plurality of exploratory feature sets, wherein the normalized exploratory feature set score indicates a relative priority of each exploratory feature set relative to the plurality of exploratory feature sets based at least in part on one or more content generation objectives.” The generate limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generate step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determine a plurality of feature labels of a dynamic framework feature set based at least in part on the normalized exploratory feature set score.” The determine limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determine step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “determine an updated set of features for a new target client based on the client characteristics of the new target client and the plurality of feature labels of the dynamic framework feature set.” The determine limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the determine step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The claim recites the limitation of “generating, [using the retrained content generation reinforcement learning model] and based on the updated set of features, a unique content data object customized for the new target client.” The generating limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. That is, nothing in the claim precludes the generating step from practically being performed in the human mind. This limitation is a mental process.
MPEP 2106.04(a)(2)(III)(C) “A Claim That Requires a Computer May Still Recite a Mental Process”
The newly added claim features do not improve the functionality of a computer or any technology.
The additional elements of processors, storage devices, user devices, a “content generation learning model [that] is trained based at least in part on the dynamic framework feature set,” and “retrain[ing] the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” do not amount to significantly more for the reasons set forth in Step 2A above.
Additionally, under the 2019 PEG, a conclusion that an additional element is insignificant extra-solution activity in Step 2A should be reevaluated in Step 2B.
Here, the “receive a feedback user experience content dataset comprising target client characteristics related to a plurality of target clients, and based on interaction data from the target client relative to the content data object” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
Here, the “transmit a visual representation of the unique content data objects to one or more user devices associated with the new target clients” step was considered to be extra-solution activity in Step 2A, and thus it is reevaluated in Step 2B to determine if it is more than what is well-understood, routine, conventional activity in the field. The addition of insignificant extra-solution activity does not amount to an inventive concept, particularly when the activity is well-understood or conventional (MPEP 2106.05(d)). This appears to be well-understood, routine, conventional activity as evidenced by MPEP 2106.05(d)(II)(i):
i. Receiving or transmitting data over a network, e.g., using the Internet to gather data.
The claim does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements of processors, storage devices, user devices, a “content generation learning model [that] is trained based at least in part on the dynamic framework feature set,” and “retrain[ing] the trained content generation reinforcement learning model using a training data set generated based on the plurality of feature labels of the dynamic framework feature set to generate a retrained content generation reinforcement learning model” amount to no more than mere instructions to apply the exception using a generic computer component. Mere instructions to apply an exception using a generic computer component cannot provide an inventive concept. The claim is not patent eligible.
Therefore, Examiner respectfully maintains the 35 U.S.C. 101 rejection of claim 1, as well as of independent claims 15 and 21, as they recite substantially the same limitations as amended claim 1.
As no further arguments were presented for the dependent claims, Examiner respectfully maintains the 35 U.S.C. 101 rejections of those claims due to their dependence upon their respective independent claims.
IV. Prior Art Rejections of Independent Claims 1, 15, and 21
Applicant's arguments filed on 01/31/2025 with respect to the prior art rejection of claims 1-5, 8, 10-17, and 20-22 have been considered and are persuasive.
Arguments
Applicant appears to assert that the Office Action relies on the '027 patent as allegedly teaching these limitations. As discussed above, Applicant respectfully submits that the '027 patent is not valid prior art under 35 U.S.C. § 102, thus rendering the rejection moot, and that no prima facie rejection of the claims has been established.
Independent claims 15 and 21 include similar recitations and are patentable for substantially the same reasons set forth above. Accordingly, for at least these reasons, Applicant respectfully submits that the Office Action has not established a prima facie rejection of independent claims 1, 15, and 21 and requests withdrawal of the same and issuance of a Notice of Allowance.
Examiner's response:
Applicant's arguments are moot in view of new ground(s) of rejections.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ABABACAR SECK whose telephone number is (571) 270-7146. The examiner can normally be reached Monday-Friday, 8:00 A.M.-6:00 P.M.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Viker Lamardo, can be reached at (571) 270-5871. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ABABACAR SECK/Examiner, Art Unit 2122
/VIKER A LAMARDO/Supervisory Patent Examiner, Art Unit 2147