Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Specification
The disclosure is objected to because of the following informalities:
In ¶83, “the certain metrics” should be “certain metrics,” as there does not appear to be any prior reference to “metrics” or “certain metrics” before ¶83.
Appropriate correction is required.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-12 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. The analysis of the claims will follow the 2019 Revised Patent Subject Matter Eligibility Guidelines (“2019 PEG”).
Claim 1
Step 1: The claim recites “a copy optimization tool comprising...”. Therefore, this claim is directed to the statutory category of a machine.
Step 2A, Prong 1: The claim recites, inter alia:
extracting a text input from a candidate copy content;…This limitation recites a mental process using observation, evaluation, and judgment with the aid of pen and paper in picking out and noting substantive words from written content... See MPEP 2106.04(a)(2)(III).
...accessing comparable historical copy content to generate historical comparison content;…This limitation recites a mental process using observation, evaluation, and judgment with the aid of pen and paper in reading other written text and writing down words to compare with candidate copy content…See MPEP 2106.04(a)(2)(III).
profiling a historical performance of the historical comparison content based on one or more of the following metrics: a variance in the historical performance of the comparison content; a frequency of use of a historical subject line or comparable copy content; and a recency of use of a historical subject line or comparable copy content;...This limitation recites a mathematical concept, specifically a mathematical calculation or relationship between content variables and metrics. See MPEP 2106.04(a)(2)(I).
matching the text input from the candidate copy content with the comparable copy content based on one or more matching algorithms;...This limitation recites a mathematical concept, specifically a mathematical formula/equation to perform matching. See MPEP 2106.04(a)(2)(I).
identifying historical matches of comparable copy content matching the text input of the candidate copy content;…This limitation recites a mental process using evaluation and judgment with the aid of pen and paper in matching text... See MPEP 2106.04(a)(2)(III).
ranking the historical matches based on one or more of the following rules: a quality of a match; a volume of historical subject lines or comparable copy content; a word count; and a recency or frequency of a historical subject line or comparable copy content; and...This limitation recites a mathematical concept, specifically using mathematical relationships between match values to rank the content matches... See MPEP 2106.04(a)(2)(I).
applying a weighting factor to the historical matches...This limitation recites a mathematical concept, specifically a mathematical relationship in assigning how much weight to give to matches... See MPEP 2106.04(a)(2)(I).
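To illustrate the characterization above that the ranking and weighting limitations amount to ordinary arithmetic of the kind performable mentally or with pen and paper, a minimal sketch follows. The data, field names, and the particular blend of quality and recency are hypothetical illustrations, not drawn from the claims or the record:

```python
# Hypothetical illustration of the "ranking" and "weighting" limitations:
# plain arithmetic over match records, performable by hand.

matches = [
    {"text": "save 20% today", "quality": 0.8, "recency": 0.5},
    {"text": "new arrivals",   "quality": 0.6, "recency": 0.9},
]

def score(m, weight=0.7):
    # A weighting factor applied to match quality, blended with recency.
    return weight * m["quality"] + (1 - weight) * m["recency"]

# Rank the historical matches by the weighted score.
ranked = sorted(matches, key=score, reverse=True)
print([m["text"] for m in ranked])
```

The sketch shows only that such ranking and weighting reduce to simple multiplication, addition, and sorting.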
Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows:
A copy optimization tool comprising: one or more computer processors, configured to execute instructions programmed using a set of machine code, wherein the instructions, when executed by the one or more computer processors, cause the one or more computer processors to perform operations comprising:...This limitation is recited at a high level of generality and recites use of generic computer equipment to perform the abstract idea. Mere recitation that a judicial exception is to be performed using generic computer equipment in its ordinary capacity cannot meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f).
using a trained model,...This limitation is recited at a high level of generality and recites use of a general class of computer algorithms to perform the abstract idea. Mere recitation that a judicial exception is to be performed using generic computer equipment running general classes of computer algorithms in their ordinary capacity cannot meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f).
Step 2B: The additional elements, taken alone or in combination, do not represent significantly more than the abstract idea itself. As discussed above with respect to integration of the abstract idea into a practical application, the additional elements do not amount to significantly more. Mere instructions to implement an abstract idea on a generic computer running generic classes of computer algorithms do not amount to significantly more (MPEP 2106.05(f)). Additionally, generally linking the abstract idea to a generic computer environment, such as marketing data copy optimization, does not add significantly more to the abstract idea itself and does not add an inventive concept. “Generic computer functions to execute an abstract idea, even when limiting the use of the idea to one particular environment, does not add significantly more” (MPEP 2106.05(h)).
Claim 2
Step 1: a machine, as above.
Step 2A, Prong 1: The claim recites, inter alia:
identifying a plurality of drivers in the extracted text ... based on the profiled historical performances.…This limitation recites a mental process using observation, evaluation, and judgment with the aid of pen and paper in finding key words that have been shown to be most impactful in past content... See MPEP 2106.04(a)(2)(III).
Step 2A, Prong 2: This judicial exception is not integrated into a practical application. The additional elements of the claim are as follows:
using the trained model… This limitation is recited at a high level of generality and recites use of a general class of computer algorithms to perform the abstract idea. Mere recitation that a judicial exception is to be performed using generic computer equipment running general classes of computer algorithms in their ordinary capacity cannot meaningfully integrate the judicial exception into a practical application. See MPEP 2106.05(f).
The corresponding analysis of the parent claim is maintained here.
Step 2B: The corresponding analysis of the parent claim is maintained here.
Claim 3
Step 1: a machine, as above.
Step 2A, Prong 1: The claim recites, inter alia:
the plurality of drivers include positive and negative drivers.…This limitation recites a mental process using observation, evaluation, judgment, and opinion with the aid of pen and paper in identifying key words and determining whether they have positive or negative impacts... See MPEP 2106.04(a)(2)(III).
Step 2A, Prong 2 & Step 2B: There are no additional elements recited, so the claim does not integrate the abstract idea into a practical application and does not amount to significantly more. As such, the claim is patent ineligible.
Claim 4
Step 1: a machine, as above.
Step 2A, Prong 1: The claim recites, inter alia:
changing at least one of the negative drivers to a positive driver and...This limitation recites a mental process using evaluation and judgment with the aid of pen and paper in relabeling a key word previously determined to have a negative impact as one that has a positive impact... See MPEP 2106.04(a)(2)(III).
transmitting a modified candidate copy content having the changed driver...This limitation recites a mental process using evaluation and judgment with the aid of pen and paper in writing a modified content with the changed label and handing the modified content off to another person…See MPEP 2106.04(a)(2)(III).
Step 2A, Prong 2 & Step 2B: There are no additional elements recited, so the claim does not integrate the abstract idea into a practical application and does not amount to significantly more. As such, the claim is patent ineligible.
Claims 5-8
Step 1: These claims are directed to “A method…”. Therefore, these claims are directed to the statutory category of a process.
Step 2A Prong 1: Claims 5-8 recite the same abstract idea limitations as in claims 1-4, respectively, that were discussed above.
Step 2A, Prong 2 & Step 2B: There are no additional elements recited in these claims.
The corresponding analysis of claims 1-4 is maintained here.
Claims 9-12
Step 1: These claims are directed to “A non-transitory computer-readable medium…”. Therefore, these claims are directed to the statutory category of a manufacture.
Step 2A Prong 1: Claims 9-12 recite the same abstract idea limitations as in claims 1-4, respectively, that were discussed above.
Step 2A, Prong 2 & Step 2B: There are no additional elements recited in these claims.
The corresponding analysis of claims 1-4 is maintained here.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-3, 5-7 and 9-11 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Charlin et al., “A Framework for Optimizing Paper Matching” (hereinafter Charlin).
Per claim 1, Charlin discloses A copy optimization tool (Abstract…A computational framework, e.g., a tool, for optimizing paper matching, hence the tool is construed as a copy optimization tool, “In this paper we propose a framework to optimize paper-to-reviewer assignments. Our framework uses suitability scores to measure pairwise affinity between papers and reviewers”) comprising:
one or more computer processors, configured to execute instructions programmed using a set of machine code, wherein the instructions, when executed by the one or more computer processors, cause the one or more computer processors (Section 3…This computational framework implements multiple machine learning models (LR, LM, BPMF) and optimization objectives, which are intrinsically computer-implemented software systems executed by processors, “Here we focus on three methods, each exploiting the different information available for prediction: a language model (LM); linear regression (LR); and Bayesian probabilistic matrix factorization (BPMF)”), to perform operations comprising:
extracting a text input from a candidate copy content (Section 1…processing submitted papers, e.g., candidate copy content, entail extracting features from text, “Assigning submitted papers to their most suitable reviewers is essential to the success of any conference, indeed to the functioning of many scientific fields, since it is reviewer assessments that determine the conference program and, to some extent, the shape of a discipline. However, this is not a simple task: large conferences often receive well over 1000 submissions that must be assigned to many hundreds of reviewers in short amount of time”; Section 3.1…These papers are represented as text, and features are extracted, such as word count vectors wp, “We may have access to additional side information about individual reviewers and papers which may come in different forms. In our setting, side information about submitted papers could include author-specified keywords, citations, and word usage in the paper…Our data sets also include an archive, containing a set of papers written by each reviewer, providing information about their expertise. This is represented as a word count vector wr summarizing r’s own papers. Similarly, we summarize each submitted paper p as a word count vector wp”, this being examples of extraction of a text input from a candidate item, e.g., paper);
using a trained model (Section 3.2…”we focus on three methods, each exploiting the different information available for prediction: a language model (LM); linear regression (LR); and Bayesian probabilistic matrix factorization (BPMF). LM uses the content of submitted papers and archived papers for prediction, but does not use reviewer bids; BPMF uses reported suitabilities/bids, but no document/archive side information; and LR uses bids and the content submissions, but not the archive”), accessing comparable historical copy content to generate historical comparison content (Section 3.2…the LM accesses archived papers written by reviewers, e.g., historical copy content, this archived content wr is used to build distributions over words for prediction (historical comparison content), “Our language model (LM) is based on these, and predicts suitabilities without using stated reviewer preferences; rather it builds a model in word (feature) space, assuming that distance in this space correlates with distance in suitability space. LM constructs a distribution over words for each reviewer, based on the archive of papers written by the reviewer wr (the reviewer side information)”);
profiling a historical performance of the historical comparison content (Section 3.1…profiling the historical relationship/affinity between reviewers and papers using suitability scores srp, which are based on preferences or expertise, “We formalize the matching problem as follows. Let r ϵ R refer to users or reviewers, p ϵ P to items or papers, and let |R| = N and |P| = M. Every user-item pair has a suitability score srp. The set of all scores can be viewed as a suitability matrix S…Only a subset of the suitabilities are observed, namely, those collected from reviewers during an elicitation process. Denote this by So, and denote the observed scores for a particular reviewer r and paper p by SrO and SpO, respectively”; Section 3.3 and Equations 3-7… the suitability scores srp reflect performance (suitability) which is analyzed and used for optimization, “We articulate several different criteria that may influence the definition of a “good” matching and explore different formulations of the optimization problem that can be used to accommodate these criteria... We formulate the basic matching problem as an IP, where each paper is assigned to its best-suited reviewers…”) based on one or more of the following metrics:
a variance in the historical performance of the comparison content (Section 4.3…expressly tracks variance in IP (Integer Program) assignments; for example, the load balancing constraint aims to minimize the within-reviewer variance (Σ_p (x_rp − x̄)² / M) across assigned papers, “Load Balancing Balance IP…”);
a frequency of use of a historical subject line or comparable copy content (Section 3.2…The Language Model (LM) constructs distributions over words Pr(w|d) based on the frequency of word occurrences in content/documents, “LM constructs a distribution over words for each reviewer, based on the archive of papers written by the reviewer wr (the reviewer side information). The starting point for LM is a multinomial Pr(w|d) over words w in a document d. The maximum likelihood estimate of Pr(w|d) is the number of occurrences of this word divided by the total number of words in the document”; Section 3.1… The use of word count vectors wr and wp also captures frequency information of content, “Our data sets also include an archive, containing a set of papers written by each reviewer, providing information about their expertise. This is represented as a word count vector wr summarizing r’s own papers. Similarly, we summarize each submitted paper p as a word count vector wp”); and
a recency of use of a historical subject line or comparable copy content;
matching the text input from the candidate copy content with the comparable copy content based on one or more matching algorithms (Section 3.3…matching papers (text input) to reviewers (comparable content) through use of an Integer Program (IP), which is a matching algorithm used to find the optimal assignment (xrp), “We formulate the basic matching problem as an IP, where each paper is assigned to its best-suited reviewers…The binary variable xrp encodes the matching of item p to user r; a match is an instantiation of these variables”);
identifying historical matches of comparable copy content matching the text input of the candidate copy content (Section 3.3…The output of the matching algorithm (Basic IP) is the set of binary variables xrp ∈ {0,1}, which encodes the matching (or historical assignment/match) of paper p to reviewer r, see Equations 3-5, “This IP, including constraints (5), is our basic formulation (Basic IP). Its solution, the optimal match, maximizes total reviewer suitability given the constraints”);
ranking the historical matches (Section 3.3…process involves finding an optimal match by maximizing suitability (Jbasic(x)) subject to constraints. This optimization process ranks potential assignments (historical matches) based on suitability scores (srp) to select the best possible set, see Equation 3) based on one or more of the following rules:
a quality of a match (Section 3.3…The quality of a match is directly represented by the suitability score (srp), where the objective function Jbasic(x) maximizes total suitability, confirming that quality is the primary rule for ranking/selection, see Equation 3);
a volume of historical subject lines or comparable copy content (Section 4.3…use of load balancing constraints (Pmin and Pmax) which restrict the volume (number) of papers assigned per reviewer, where volume is explicitly handled as a rule/constraint in the Balance IP objective function using a penalty function, see Equation 9, “Load Balancing Balance IP: The experiments above all constrain the number of papers per reviewer to be within a specific range (Pmin–Pmax). There is no good indication as to how to set these two extrema. Instead we now use the Balance IP, both for matching and evaluation (see Eq. 9), setting f to be the absolute value function”);
a word count (Section 3.1…Word count vectors (wp and wr) are used as key features in models like LR and LM to predict suitability, since suitability determines the ranking/matching, word count is an underlying rule/feature governing the outcome, “Our data sets also include an archive, containing a set of papers written by each reviewer, providing information about their expertise. This is represented as a word count vector wr summarizing r’s own papers. Similarly, we summarize each submitted paper p as a word count vector wp”); and
a recency or frequency of a historical subject line or comparable copy content (Section 3.2…The Language Model (LM) constructs distributions over words Pr(w|d) based on the frequency of word occurrences in content/documents, “LM constructs a distribution over words for each reviewer, based on the archive of papers written by the reviewer wr (the reviewer side information). The starting point for LM is a multinomial Pr(w|d) over words w in a document d. The maximum likelihood estimate of Pr(w|d) is the number of occurrences of this word divided by the total number of words in the document”); and
applying a weighting factor to the historical matches (Section 3.3…The objective function Jbasic(x) uses the suitability score (srp) as a weighting factor applied to the binary assignment xrp, see Equation 3; Section 3.3…In the Balance IP, see Equation 4, a parameter λ serves as a weighting factor to control the trade-off between suitability and load equity, “The parameter λ controls the tradeoff between load equity and match quality. The Jbalance objective (Eq. 6) along with the constraints expressed in Eq. 4 comprise our Balance IP”).
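For context on the mapping above, the suitability-weighted assignment that Charlin formulates as an integer program can be sketched in simplified form. The greedy routine and data below are a hypothetical stand-in for the IP solver described in Charlin's Section 3.3, not Charlin's actual implementation, and a greedy pass is not guaranteed to reach the IP optimum in general:

```python
# Minimal sketch of a suitability-weighted matching in the style of
# Charlin's Basic IP: each paper is assigned to its best-suited reviewer,
# subject to a per-reviewer load cap (hypothetical data and routine).

def match_papers(suitability, max_load):
    """suitability[r][p] = score s_rp; returns a paper -> reviewer map."""
    n_reviewers = len(suitability)
    n_papers = len(suitability[0])
    load = [0] * n_reviewers
    assignment = {}
    # Greedy stand-in for the integer program: visit (r, p) pairs in
    # descending suitability and assign while constraints allow.
    pairs = sorted(
        ((suitability[r][p], r, p)
         for r in range(n_reviewers) for p in range(n_papers)),
        reverse=True,
    )
    for _score, r, p in pairs:
        if p not in assignment and load[r] < max_load:
            assignment[p] = r
            load[r] += 1
    return assignment

suitability = [
    [0.9, 0.2, 0.4],   # reviewer 0's scores for papers 0-2
    [0.1, 0.8, 0.7],   # reviewer 1's scores for papers 0-2
]
print(match_papers(suitability, max_load=2))
```

The suitability scores play the role of the weighting factor applied to candidate matches, as discussed in the mapping of the weighting limitation above.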
Per claim 2, Charlin discloses claim 1, further disclosing the operations further comprise: identifying a plurality of drivers in the extracted text (Section 3.1…The papers are summarized as word count vectors (wp), “…we summarize each submitted paper p as a word count vector wp”; Section 3.2…These word count vectors contain specific features (words) that are inputs to the learning models, so that when a model such as Linear Regression (LR) predicts suitability (s^rp) based on these features using s^rp=θr ⋅ wp, the individual words (features in wp) are the components, or "drivers," that determine the outcome, “Regression: Linear regression (LR) predicts suitabilities directly using the side information associated with the items. Each reviewer has a set of parameters θr, which is applied to item information wp to form an estimate of srp: s^rp=θr ⋅ wp”) using the trained model (Section 3.2…trained models such as the Language Model (LM) and Linear Regression (LR) to predict suitability scores, these models taking textual input features (wp) to perform prediction) based on the profiled historical performance (Section 3.2…The learning methods (e.g., LR) use stated reviewer preferences or observed suitability scores (So) as training observations, these observed suitabilities are the historical performance used to train the model, hence the relationship between the textual features (wp) and the performance (suitability srp) is learned and identified based on this profiled historical performance, see Equation 2).
Per claim 3, Charlin discloses claim 2, further disclosing the plurality of drivers include positive and negative drivers (Section 3.2…The drivers are the predictive features (words/keywords) identified by the trained model, such as Linear Regression (LR), based on historical performance (suitability scores), where the LR model predicts suitability s^rp using side information and the predicted score is calculated as : s^rp=θr ⋅ wp, with the LR model trained to minimize error (MSE), these learned weights (θr) will necessarily contain both positive coefficients (features that positively correlate with high suitability/performance) and negative coefficients (features that negatively correlate with high suitability/performance), see Equation 2, where the features corresponding to positive coefficients are positive drivers, and those corresponding to negative coefficients are negative drivers).
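The sign-of-coefficient reasoning applied to claim 3 above can be made concrete with a short sketch. The feature words and weight values below are hypothetical, not values from Charlin:

```python
# Illustrative sketch (hypothetical data): given learned linear-regression
# weights theta_r over word features, features with positive coefficients
# are treated as positive drivers and those with negative coefficients as
# negative drivers, mirroring the reasoning applied to claim 3.

def split_drivers(theta):
    """theta: {feature_word: coefficient}. Returns (positive, negative)."""
    positive = sorted(w for w, c in theta.items() if c > 0)
    negative = sorted(w for w, c in theta.items() if c < 0)
    return positive, negative

theta_r = {"bayesian": 0.42, "matching": 0.17, "unrelated": -0.30}
print(split_drivers(theta_r))
```

The split is a direct reading of coefficient signs, which is the sense in which learned LR weights yield both positive and negative drivers.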
Claims 5-7 are substantially similar in scope and spirit to claims 1-3. Therefore, the rejections of claims 1-3 are applied accordingly.
Claims 9-11 are substantially similar in scope and spirit to claims 1-3. Therefore, the rejections of claims 1-3 are applied accordingly.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Patents and/or related publications are cited in the Notice of References Cited (Form PTO-892) attached to this action to further show the state of the art with respect to marketing copy optimization with automated driver identification.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALAN CHEN whose telephone number is (571) 272-4143. The examiner can normally be reached M-F 10-7.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kamran Afshar can be reached at (571) 272-7796. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ALAN CHEN/Primary Examiner, Art Unit 2125