DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This final office action is in response to the amendment filed 22 December 2025.
Claims 1-5, 7-13, 15-19, and 21-23 are pending. Claims 1, 8, and 15 are independent claims. Claims 6, 14, and 20 are cancelled. Claims 21-23 are newly added.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 8-10, 15-17, and 21-22 are rejected under 35 U.S.C. 103 as being unpatentable over Downey et al. (US 2007/0288950, published 13 December 2007, hereafter Downey) in view of Sinha et al. (US 11,238,363, filed 27 April 2017, hereafter Sinha).
As per independent claim 1, Downey discloses a computer-implemented method for composite classification of a plurality of classification inputs, the computer-implemented method comprising:
identifying a plurality of classification features, wherein a classification feature of the plurality of classification features is associated with a plurality of classification feature values (paragraphs 0080 and 0089: Here, metadata is used to store audience classification parameters (paragraph 0080). These classification parameters (classification features) may include values to define the audience, including demographics, psychographics, geography, and audience size (paragraph 0089))
identifying a plurality of initial classes, wherein an initial class of the plurality of initial classes comprises a per-initial-class input subset of a plurality of classification inputs that are associated with a respective classification feature value for a respective classification feature (paragraphs 0094-0095: Here, for each classification feature a highest vote value is identified (paragraph 0095). These highest vote values serve as a default for determining content to serve, as it encompasses the largest group of the universe of users. This highest vote value serves as the initial-class input subset for the plurality of classification inputs)
determining a plurality of composite classification scenarios (paragraph 0095: Here, based upon the classification parameters of a user deviating from those associated with the highest vote values, different content is served that targets a user based upon their associated classification parameters), wherein:
(i) a composite classification scenario of the plurality of composite classification scenarios is associated with a set of classification feature combinations selected from the plurality of classification features (paragraphs 0093-0095: Here, a flotilla matrix of asset options is generated. This flotilla matrix includes assets identifying a plurality of target audiences based upon the classification inputs associated with the user. Based upon the composite classification inputs associated with the user, an asset matching that target audience is selected and presented. This flotilla matrix is analogous to the claimed n-sized classification feature combination having a plurality of classification scenarios)
(ii) the composite classification scenario is associated with a set of per-scenario initial class subsets of the plurality of initial classes each associated with a corresponding classification feature in the set of classification feature combinations (paragraphs 0093-0095: Here, the flotilla matrix of asset options includes a number of per-scenario default settings (initial classes) for each classification feature in the flotilla. For each classification feature a highest vote value is identified (paragraph 0095). These highest vote values serve as a default for determining content to serve, as it encompasses the largest group of the universe of users. This highest vote value serves as the initial-class input subset for the plurality of classification inputs. Based upon the available scenarios that more closely match the audience classification information, these assets are presented to the specific segment instead of the initial classified assets)
(iii) the composite classification scenario is associated with a plurality of per-scenario composite classes (paragraphs 0093-0095: Here, the flotilla matrix of asset options includes a number of per-scenario default settings (initial classes) for each classification feature in the flotilla. For each classification feature a highest vote value is identified (paragraph 0095). These highest vote values serve as a default for determining content to serve, as it encompasses the largest group of the universe of users. This highest vote value serves as the initial-class input subset for the plurality of classification inputs. Based upon the available scenarios that more closely match the audience classification information, these assets are presented to the specific segment instead of the initial classified assets)
inputting the plurality of classification scenarios to receive a top set of composite classification scenarios comprising a subset of the plurality of composite classification scenarios by (paragraph 0095: Here, a set of asset options and associated metadata are provided to the UED. The UED selects which of the available options to deliver to a user based upon a comparison of the current audience classification parameter values to the metadata associated with each of the asset options. One of the asset option sets, such as the one comprising the highest vote values, may be inserted into the channel for displaying to the user):
(i) determining a plurality of composite classification scenario scores corresponding to the plurality of composite classification scenarios (paragraph 0098: Here, a “goodness of fit” is determined based upon a comparison of the audience classification to the target audience. This factors in the plurality of classification features associated with each target audience for the asset and the classification features associated with users receiving the asset)
(ii) determining the top set of composite classification scenarios based at least in part on the plurality of composite classification scenario scores (paragraph 0098: Here, a “goodness of fit” is determined based upon a comparison of the audience classification to the target audience. This factors in the plurality of classification features associated with each target audience for the asset and the classification features associated with users receiving the asset)
initiating performance of one or more classification-based actions based at least in part on the top set of composite classification scenarios (paragraphs 0100-0101: Here, the asset is inserted into the particular time-slot for a particular network and delivered to the targeted end user)
Downey fails to specifically disclose:
using one or more processors
inputting the plurality of classification scenarios to a machine learning model to receive a top set of composite classification scenarios comprising a subset of the plurality of composite classification scenarios
However, Sinha, which is analogous to the claimed invention because it is directed toward classifying based on machine learning models, discloses:
using one or more processors (Figure 3, item 320: Here, a device (item 300) includes a processor (item 320))
inputting the plurality of classification scenarios to a machine learning model to receive a top set of composite classification scenarios comprising a subset of the plurality of composite classification scenarios (Figures 1G and 4; column 4, line 55- column 5, line 7: Here, a score is generated by the machine learning model based upon positive and negative entries associated with a set of modules (column 3, line 63- column 4, line 16). Based upon a set of terms a model is generated for classifying unclassified data (column 4, lines 17-44). This unclassified data set is then scored and these scores are provided to the user (Figure 4, item 470))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sinha with Downey, with a reasonable expectation of success, as it would have allowed for using a deep neural network to perform user classification to provide a user a ranked list of scores, thereby facilitating performance of actions (Sinha: Figures 1G and 4; column 4, line 55 - column 5, line 7).
As per dependent claim 2, Downey and Sinha disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Downey discloses wherein:
(i) a per-scenario composite class of the plurality of per-scenario composite classes is associated with a set of per-composite-class initial class subsets of the plurality of initial classes comprising a subset of initial classes selected from a distinct per-scenario subset for the composite classification scenario (paragraphs 0093-0095: Here, the flotilla matrix of asset options includes a number of per-scenario default settings (initial classes) for each classification feature in the flotilla. For each classification feature a highest vote value is identified (paragraph 0095). These highest vote values serve as a default for determining content to serve, as it encompasses the largest group of the universe of users. This highest vote value serves as the initial-class input subset for the plurality of classification inputs. Based upon the available scenarios that more closely match the audience classification information, these assets are presented to the specific segment instead of the initial classified assets)
(ii) a composite classification scenario score is determined for the composite classification scenario based at least in part on a per-scenario composite class for the composite classification scenario (paragraphs 0154 and 0158-0160: Here, matching criteria are used to identify assets for providing to users. This includes a combination of one or more matching criteria (paragraph 0158) where each constraint is associated with positive weighting factors (paragraph 0159) and negative weighting factors (paragraph 0160). The combination of all positive and negative weighting factors provides the composite score for determining the goodness of the fit (paragraph 0098))
(iii) determining the composite classification scenario score for the composite classification scenario comprises:
determining a plurality of per-composite-class cost measures corresponding to a plurality of per-scenario composite classes associated with the composite classification scenario based at least in part on each per-input cost measure for a per-composite class input subset of the plurality of classification inputs that are associated with the set of per-composite-class initial class subsets for the per-scenario composite class (paragraphs 0154 and 0158-0160: Here, matching criteria are used to identify assets for providing to users. This includes a combination of one or more matching criteria (paragraph 0158) where each constraint is associated with positive weighting factors (paragraph 0159) and negative weighting factors (paragraph 0160). The combination of all positive and negative weighting factors provides the composite score for determining the goodness of the fit (paragraph 0098))
determining, based at least in part on the plurality of per-composite-class cost measures, the composite classification scenario score for the composite classification scenario (paragraphs 0154 and 0158-0160: Here, matching criteria are used to identify assets for providing to users. This includes a combination of one or more matching criteria (paragraph 0158) where each constraint is associated with positive weighting factors (paragraph 0159) and negative weighting factors (paragraph 0160). The combination of all positive and negative weighting factors provides the composite score for determining the goodness of the fit (paragraph 0098))
As per dependent claim 3, Downey and Sinha disclose the limitations similar to those in claim 2, and the same rejection is incorporated herein. Downey discloses wherein determining a per-composite-class cost measure of the plurality of per-composite-class cost measures for a per-scenario composite class of the plurality of per-scenario composite classes that is associated with the composite classification scenario comprises:
determining a plurality of per-input individual cost measures (paragraphs 0158-0160: Here, a per-input weighting factor is disclosed. This provides a positive/negative weighting factor for each match)
determining a per-composite-class composite cost measure for the per-scenario composite class based at least in part on a largest per-input individual cost measure for the per-composite-class input subset that is associated with the particular per-scenario composite class (paragraphs 0158-0160: Here, a composite value is generated from the weighting factors associated with each keyword. This includes the summing of the values from the positive and negative weighting factors. This summation includes at least one item which is the “largest per-input individual cost measure” for the set of matching keywords)
determining the per-composite-class cost measure for the per-scenario composite class based in part on the per-composite-class composite cost measure (paragraphs 0158-0160).
As per claims 8-10, these claims recite limitations substantially similar to those of claims 1-3, respectively. Claims 8-10 are similarly rejected.
As per claims 15-17, these claims recite limitations substantially similar to those of claims 1-3, respectively. Claims 15-17 are similarly rejected.
As per dependent claim 21, Downey and Sinha disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Sinha further discloses wherein:
(i) the machine learning model is trained using a set of training data entries (Figures 1B and 4; column 3, lines 37-62: Here, a sampling is performed to identify positive and negative entities in a training data set)
(ii) a training data entry of the set of training data entries comprises (a) a per-composite-class cost measure for a per-scenario composite class associated with a training composite classification scenario (Figure 4, item 450: Here, a model is generated based upon the positive and negative entities. This includes a set of priority terms (Figure 4, item 430) and ancillary terms (Figure 4, item 450)) and (b) a ground-truth label indicating whether the training composite classification scenario is one of the top set of composite classification scenarios among the set of training data entries (Figures 1A and 1B: Here, the positive and negative entities are ground truth labels indicating whether an item is a positive or negative for classifying unclassified entities (column 4, lines 29-56))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sinha with Downey, with a reasonable expectation of success, as it would have allowed for using a deep neural network to perform user classification to provide a user a ranked list of scores, thereby facilitating performance of actions (Sinha: Figures 1G and 4; column 4, line 55 - column 5, line 7).
As per dependent claim 22, Downey and Sinha disclose the limitations similar to those in claim 1, and the same rejection is incorporated herein. Sinha further discloses wherein the one or more classification-based actions comprises at least: rendering, via a user interface, predictive inference metadata associated with the top set of composite classification scenarios (Figure 1G: Here, the analytics platform provides, to a user device, information that identifies respective entities above a threshold and their respective classification scores and information (column 4, line 57- column 5, line 7)).
Claims 4-5, 7, 11-12, and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Downey and Sinha and further in view of Elenbaas et al. (US 2005/0028194, published 3 February 2005, hereafter Elenbaas).
As per dependent claim 4, Downey and Sinha disclose the limitations similar to those in claim 3, and the same rejection is incorporated herein. Downey discloses wherein determining a per-input individual cost measure of the plurality of per-input individual cost measures for a classification input of the plurality of classification inputs in the per-composite-class input subset that is associated with the particular per-scenario composite class comprises:
determining a per-input initial cost measure for the particular classification input (paragraphs 0158-0160)
determining the per-input individual cost measure for the particular classification input based at least in part on the per-input initial cost measure (paragraphs 0158-0160)
Downey fails to specifically disclose determining a per-input adjustment factor for the particular classification input and determining the per-input individual cost measure for the particular classification input based at least in part on the per-input adjustment factor.
However, Elenbaas, which is analogous to the claimed invention because it is directed toward classifying data using weights and adjustment factors, discloses determining a per-input adjustment factor for the particular classification input and determining the per-input individual cost measure for the particular classification input based at least in part on the per-input adjustment factor (paragraph 0031: Here, a classification is performed based upon a count of a number of keywords. A ranking is performed based upon weights associated with a user's preferred anchor and/or broadcast channel. These weights may be adjusted based upon an adjustment factor). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Elenbaas with Downey-Sinha, with a reasonable expectation of success, as it would have allowed for adjusting weights in order to improve the classification (Elenbaas: paragraph 0031).
As per dependent claim 5, Downey, Sinha, and Elenbaas disclose the limitations similar to those in claim 4, and the same rejection is incorporated herein. Downey discloses wherein determining the per-composite-class cost measure for the per-scenario composite class based at least in part on the per-composite-class composite cost measure comprises:
determining a plurality of per-input composite cost measures corresponding to the plurality of classification inputs in the per-composite-class input subset that is associated with the per-scenario composite class, based at least in part on the per-composite-class composite cost measure for the per-scenario composite class (paragraphs 0158-0160: Here, a per-input weighting factor is disclosed. This provides a positive/negative weighting factor for each match. These positive/negative weighting factors associated with each weight is a per-input cost measure. The combination of each of these per-input cost measures is the per-composite-class composite cost measure)
determining the per-composite-class cost measure based at least in part on the plurality of per-input composite cost measures (paragraphs 0158-0160: Here, these cost measures are based upon the summation of the positive and negative weights)
Downey fails to specifically disclose determining a per-input composite cost measure based at least in part on the per-input adjustment factor for the classification input. However, Elenbaas discloses determining a per-input composite cost measure based at least in part on the per-input adjustment factor for the classification input (paragraph 0031: Here, a classification is performed based upon a count of a number of keywords. A ranking is performed based upon weights associated with a user's preferred anchor and/or broadcast channel. These weights may be adjusted based upon an adjustment factor). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Elenbaas with Downey-Sinha, with a reasonable expectation of success, as it would have allowed for adjusting weights in order to improve the classification (Elenbaas: paragraph 0031).
As per dependent claim 7, Downey, Sinha, and Elenbaas disclose the limitations similar to those in claim 4, and the same rejection is incorporated herein. Downey discloses determining, based at least in part on input feature data associated with the classification input and using a cost determination associated with a target cost category, a per-input initial cost measure (paragraphs 0158-0160).
Downey fails to specifically disclose a machine learning model. However, Sinha discloses determining, using the one or more processors and a composite classification scenario scoring machine learning model (Figures 1G and 4; column 4, line 55- column 5, line 7: Here, a score is generated by the machine learning model based upon positive and negative entries associated with a set of modules (column 3, line 63- column 4, line 16). Based upon a set of terms a model is generated for classifying unclassified data (column 4, lines 17-44). This unclassified data set is then scored and these scores are provided to the user (Figure 4, item 470))
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Sinha with Downey, with a reasonable expectation of success, as it would have allowed for using a deep neural network to perform user classification to provide a user a ranked list of scores, thereby facilitating performance of actions (Sinha: Figures 1G and 4; column 4, line 55 - column 5, line 7).
With respect to claims 11-12, these claims recite limitations substantially similar to those of claims 4-5, respectively. Claims 11-12 are similarly rejected.
With respect to claims 18-19, these claims recite limitations substantially similar to those of claims 4-5, respectively. Claims 18-19 are similarly rejected.
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Downey, Sinha, and Elenbaas and further in view of Foster et al. (US 2003/0214950, hereafter Foster).
As per dependent claim 13, Downey, Sinha, and Elenbaas disclose the limitations similar to those in claim 12, and the same rejection is incorporated herein. Downey fails to specifically disclose wherein:
the per-input adjustment factor is determined based at least in part on an adjustment feature having an adjustment feature range
each classification input is associated with an adjustment feature value for the adjustment feature
the per-input adjustment factor for classification inputs having a smallest adjustment feature value in the adjustment feature range are associated with an initial adjustment factor
However, Elenbaas discloses each classification input is associated with an adjustment feature value for the adjustment feature (paragraph 0031: Here, a classification is performed based upon a count of a number of keywords. A ranking is performed based upon weights associated with a user's preferred anchor and/or broadcast channel. These weights may be adjusted based upon an adjustment factor). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Elenbaas with Downey-Sinha, with a reasonable expectation of success, as it would have allowed for adjusting weights in order to improve the classification (Elenbaas: paragraph 0031).
Additionally, Foster, which is analogous to the claimed invention because it is directed toward match rates, discloses:
the per-input adjustment factor is determined based at least in part on an adjustment feature having an adjustment feature range (paragraph 0027: Here, an adjustment factor range is specified to improve the match rate)
the per-input adjustment factor for classification inputs having a smallest adjustment feature value in the adjustment feature range are associated with an initial adjustment factor (Figure 3; paragraphs 0033-0034: Here, in instances where the match rate is minimized, additional factors, such as "Wait Time," determine whether an adjustment is merited; otherwise the adjustment is not applied)
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Foster with Downey-Sinha-Elenbaas, with a reasonable expectation of success, as it would have allowed for performing adjustments based upon parameters (Foster: paragraphs 0033-0034). This would have allowed for application of adjustment parameters in order to improve the serving of content.
Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Downey and Sinha and further in view of Greenewald et al. (US 2021/0089878, published 25 March 2021, hereafter Greenewald).
As per dependent claim 23, Downey and Sinha disclose the limitations similar to those in claim 2, and the same rejection is incorporated herein. Downey fails to specifically disclose wherein the machine learning model comprises an aggregation layer configured to aggregate the plurality of per-composite-class cost measures.
However, Greenewald, which is analogous to the claimed invention because it is directed toward a machine learning model, discloses wherein the machine learning model comprises an aggregation layer configured to aggregate the plurality of cost measures (Figure 3; paragraph 0060: Here, neurons of the machine learning neural network are merged to form an aggregation layer. These neurons are associated with weight vectors used in the cost matrix to determine matching). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have combined Greenewald with Downey-Sinha, with a reasonable expectation of success, as it would have allowed for merging layers via aggregation to determine costs (Greenewald: paragraph 0060).
Response to Arguments
Applicant’s arguments have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Downey and Sinha.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Strimel et al. (US 12205574): Discloses a machine learning model that computes costs based upon a cost aggregation layer (column 9, lines 53-58)
Smolyanskiy et al. (US 2019/0295282): Discloses an architecture using aggregator of the neural network (paragraphs 0036 and 0038)
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KYLE R STORK whose telephone number is (571)272-4130. The examiner can normally be reached 8am-2pm and 4pm-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Omar Fernandez Rivas can be reached at 571/272-2589. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KYLE R STORK/Primary Examiner, Art Unit 2128