DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/10/2025 has been entered.
Response to Arguments
Applicant’s arguments, see pages , filed 11/18/2025, with respect to the rejection(s) of claim(s) 1-3 and 7 under 35 U.S.C. 103 have been fully considered but are moot because of a new ground of rejection.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claim(s) 1-3, 7, 9-11 and 15-17 are rejected under 35 U.S.C. 103 as being unpatentable over Duke (US-20190392487-A1) in view of Zhou (US-20170193545-A1) in view of Zhong (US-20210303638-A1) in further view of Singh (US-20180308003-A1).
Regarding Claim 1, Duke discloses a computer implemented method for recommending a header for a message comprising:
parsing previously used headers for messages registered in a historical complaints registry into a plurality of datasets based on at least one of a name of an entity, a brand of an entity (par.26, previous ads), (paragraph [0024], “(or other data-sets indicating the purpose or goals of each previous ads) are obtained and are transformed or converted (block 8) into feature vectors or feature lists that are suitable for use by Machine Learning (ML) algorithms.” and paragraph [0026], "the method of the present invention may analyze all the extracted data, as well as performance data of each previous ad; and may generate insights that indicate that previous ads that included a first particular combination of components had performed well, or that ads that included a second particular combination of components had performed poorly…" and paragraph [0030], "The title extraction component identifies and extracts the headers and sub-headers that are explicitly mentioned in the creative brief. It separates the brief text to an array or list or data-set of discrete sentences and phrases, and then uses text embeddings to identify the most similar sentences/phrases to the headers/sub-headers in the data set of previous ads." and paragraph [0091], “…and may even have access to other data of that user (e.g., full name, exact age or date-of-birth, email address, current location)… the system of the present invention may operate in real time to generate and to construct on-the-fly a particular digital ad unit that is specifically tailored to this specific end-user;” and paragraph [0092], "…some inputs/outputs 300 which may be utilized and/or generated in conjunction with automatic generation and construction of advertisements, in accordance with some demonstrative embodiments of the present invention. 
For example, existing ads 301 (or previous ads, or historic ads, or previously-used ads) as well as suitable Labels are fed into an AI engine 303; which in turn populates multiple databases, such as: a label database 311, a profile database 312, and a results database 313." (i.e., Parsing previously used advertisements into multiple databases. The previously used ads comprise headers/sub-headers that were used to determine which ads performed well and which performed poorly.));
training a machine learning model on the plurality of datasets to learn a decision tree classifier using the plurality of historical complaints datasets (paragraph [0025], "Then, Machine Learning (ML) algorithms or operations (block 9) create models (block 10) that generate the content and determine the position of ad elements based on the feature vectors as generated or provided by block 8. Different machine learning algorithms may be used for this purpose, for example Nearest Neighbors, Gradient Boosting Decision Trees… and/or other machine learning algorithms." and paragraph [0026], "the method of the present invention may analyze all the extracted data, as well as performance data of each previous ad; and may generate insights that indicate that previous ads that included a first particular combination of components had performed well, or that ads that included a second particular combination of components had performed poorly…" and paragraph [0028], "The machine learning models that were created or generated in the training phase, are used (block 12) to generate the textual elements of the desired new advertisement (e.g. headline, sub-header, call to action, or the like)…" (i.e., Training machine learning on the data of previous ads.));
receiving a request from an entity (par.41, client) for a header (par.30, title or headers/sub-headers) for a specific message (par.31, ad) (paragraph [0028], "The machine learning models that were created or generated in the training phase, are used (block 12) to generate the textual elements of the desired new advertisement (e.g. headline, sub-header, call to action, or the like)…" and paragraph [0030], "The title extraction component identifies and extracts the headers and sub-headers that are explicitly mentioned in the creative brief. It separates the brief text to an array or list or data-set of discrete sentences and phrases, and then uses text embeddings to identify the most similar sentences/phrases to the headers/sub-headers in the data set of previous ads." and paragraph [0031], "For the title generation, a variety of approaches may be used; such as, text summarization, using a recurrent neural network (RNN) to learn the mapping between an ad description and its title, which may be deployed where sufficient data exists." and paragraph [0041], "In the second step, an Ad Creative Brief Obtaining Unit 205 of the system obtains the creative brief (e.g., from the client)." (i.e., User enters information in the "creative brief" in order to get an advertisement with title, header/sub-header.));
training the machine learning model on at least one of a name of the requesting entity, a brand of the requesting entity, (paragraph [0065], and Fig.6, "An Ad Construction Unit 240 of the system is now ready to generate an ad or series of ads for Chris. For example, the ad creative brief (for a new desired ad) is transformed or converted into feature vectors that are suitable for use by machine learning algorithms. Different algorithms may be used to create these feature vectors; for example, feature encoding, TF-IDF, word embedding, and/or other algorithms. The machine learning models that were created in the training phase, and the NLG models which have been trained on an advertising specific corpus (or that utilize an advertising-related or marketing-related dictionary or data-set or word-bank, for training and/or for various methods of generating natural language phrases), are used to generate the textual elements of the advertisement (e.g., headline, sub-header, call to action) based on the feature vectors provided by the ad creative brief of the new desired ad." and paragraph [0091], “…and may even have access to other data of that user (e.g., full name, exact age or date-of-birth, email address, current location)… the system of the present invention may operate in real time to generate and to construct on-the-fly a particular digital ad unit that is specifically tailored to this specific end-user;” (i.e., Training the machine learning to learn on the specific brand such as for "Chris" who wants to create an ad as shown in Fig.6, and training to include specific header/sub-header based on the Creative brief input.));
and generating at least one proposed header in response to the request (paragraph [0067], "The system may now generate multiple different ads, optionally accompanied with a set of data based justifications next to each ad indicating why the system generated each such ad. Optionally, wording or content of an ad component (in the newly-proposed ad) may be selectively modified by a reviewing user, and multiple variations are then automatically re-generated by the system based on such introduced changes." and paragraph [0095], Fig.6, "Reference is made to FIG. 6, which is a schematic illustration demonstrating an automatically-generated creative (ad) 600, in accordance with some demonstrative embodiments of the present invention. Demonstrated are, for example, selection/generation and placement of the Headline; selection/generation and placement of the Sub-Head or Sub-Headline;" (i.e., Generating multiple ads.)).
However, Duke does not explicitly disclose parsing previously used headers for messages registered in a historical complaints registry based on a wrong association with into a plurality of datasets based on at least one of
Zhou discloses parsing previously used headers for messages registered in a historical complaints registry based on a wrong association with into a plurality of datasets based on at least one of (paragraph [0044], "the predictive model implemented by the quality model circuitry 204 is trained and/or retrained. For example, at an initial setup stage, the predictive model may be trained with historical advertisements of known bad quality (e.g., as indicated by user feedback received indicating the particular advertisement is offensive or annoying). Periodically, or upon receipt of one or more indications that an instance of sponsored content is of poor quality, the predictive model can be re-trained with the newly received data and/or the poor quality sponsored content." (i.e., Modifying the Duke system to include data on ads that users complained were offensive or annoying, and retraining the machine learning model with the user complaints about the ads. “Wrongly associated” is read as the ad message being ineffective at conveying the intended message.)).
Duke and Zhou are considered to be analogous to the claimed invention because they both use previous data of messages, such as ads that include headers, in order to provide users with quality messages using machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Duke to implement the method of Zhou in order to improve the quality of advertisements provided to users, because a machine learning model trained on bad advertisements will learn which ads are low performing and avoid providing users with bad advertisements or bad messages (Zhou, paragraph [0051], “To improve the effectiveness of native ads, an ad serving systems should provide ads that satisfy an end user's needs according to two aspects: relevance and quality. Relevance is the extent to which an ad matches user interests. Relevant ads should be personalized according to the target user preferences, browsing patterns, search behavior, etc. Quality, however, is a unique characteristic of the ad or sponsored content itself, and can be independent of the individual users targeted by the platform.”).
However, Duke in view of Zhou do not explicitly disclose training a machine learning model on the plurality of datasets to learn a decision tree classifier using the plurality of historical complaints datasets; training the machine learning model on at least one of
Zhong discloses training the machine learning model on at least one of (par.49, input string) (paragraph [0050], "Entity type 238 indicates a named entity associated with input string 230." and paragraph [0051], "Entity type 238 may also…be identified based on analysis of input string 230 and/or the context in which input string 230 was obtained." and paragraph [0054], "A model-creation apparatus 210 trains embedding model 208 to generate embeddings that reflect semantic relationships between standardized entities 232 and user-generated input strings (e.g., input string 230)." (i.e., Training the machine learning to reflect relationship between user input and the standardized entities which data in 202 as shown in Fig.2.)).
Duke in view of Zhou and Zhong are considered to be analogous to the claimed invention because they are in the same field of handling natural language data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Duke to implement the method of Zhong to provide a more efficient, scalable manner of matching input strings with standardized entities, such as Duke's previous ads as described in paragraph 16, in order to provide more relevant information to the user and to reduce searching for a previous ad when creating a new ad (Zhong, paragraph [0022], “Moreover, standardized identities may be identified as semantically similar to the input strings by searching hierarchical clusters of embeddings of the standardized entities, which allows large numbers of input strings to be matched to standardized entities with large numbers of possible values in an efficient, scalable manner (e.g., using efficient graph traversal techniques instead of brute force calculation of distances between the input string embeddings and all standardized entity embeddings).”).
However, Duke in view of Zhou in further view of Zhong do not explicitly disclose training a machine learning model on the plurality of datasets to learn a decision tree classifier using the plurality of historical complaints datasets; classifying the at least one proposed header using the trained machine learning model based on the decision tree classifier; and selecting at least one of the at least one proposed header based on the classification.
Singh discloses training a machine learning model on the plurality of datasets to learn a decision tree classifier using the plurality of historical complaints datasets (paragraph [0199], Fig.14, "At step 1402, the processor sends each set of similarity metrics to each tree in the random forest. In this example, there are three trees, A, B, and C. " (i.e., Duke discloses a Gradient Boosting Decision Trees, examiner is relying on Singh to expand the use of the decision trees.));
classifying the at least one proposed header using the trained machine learning model based on the decision tree classifier (paragraph [0200], Fig.14, "At steps 1404A-C, trees A through C produce their respective classification for the similarity metrics. The respective classification may differ for each tree." (i.e., The multiple ads that were generated are classified.));
and selecting at least one of the at least one proposed header based on the classification (paragraph [0202], Fig.14, "At step 1408, the forest returns the classification and similarity score. In this way, the random forest produces a classification and similarity score using a collection of trained random trees when presented with an input in the form of similarity metrics." (i.e., The output of the classification from each tree is then combined and fed back to Duke, par.67.)).
Duke in view of Zhou in further view of Zhong and Singh are considered to be analogous to the claimed invention because they are in the same field of handling natural language data. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Duke to implement the method of Singh (Singh, Fig.14) to improve input string recognition; by using the method of Singh (Fig.14), the system can provide an accurate answer even when the input data, such as the Duke ad modification described in par.67, is not fully complete (Singh, paragraph [0043], “The machine learning model can be trained and used to provide a similarity score for any number of strings stored in memory relative to a possibly incomplete input string. The training of the model can use iterative techniques that optimize the predicted result based on a set of training data for which the result is known. The similarity scores can be used to identify a corresponding string stored in memory.” and paragraph [0180], “The random forest model is a collection of random decision trees. Similarity metrics are passed to each of the trees, and each tree is capable of producing a classification result. In the case of the present invention, this result can be whether or not the incomplete input string matches one of the filtered strings.”).
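The random-forest flow that Singh describes (Fig.14, steps 1402-1408) can be sketched roughly as follows. This is an illustrative Python sketch only, not the reference's implementation; the individual tree rules, the metric names, and the input values are hypothetical examples chosen for demonstration:

```python
# Sketch of Singh's Fig.14 flow: the same similarity metrics are sent to
# each tree in the forest (step 1402), each tree produces its own
# classification (steps 1404A-C), and the forest combines the votes into
# a classification and similarity score (step 1408).
# Tree rules and metric names below are hypothetical.
from collections import Counter

def tree_a(metrics):
    return "match" if metrics["edit_sim"] > 0.8 else "no_match"

def tree_b(metrics):
    return "match" if metrics["prefix_sim"] > 0.5 else "no_match"

def tree_c(metrics):
    return "match" if metrics["edit_sim"] + metrics["prefix_sim"] > 1.2 else "no_match"

def forest_classify(metrics, trees):
    votes = [t(metrics) for t in trees]           # per-tree classifications
    label, count = Counter(votes).most_common(1)[0]
    similarity_score = count / len(trees)         # fraction of agreeing trees
    return label, similarity_score

label, score = forest_classify({"edit_sim": 0.9, "prefix_sim": 0.4},
                               [tree_a, tree_b, tree_c])
```

In this toy run, trees A and C vote "match" while tree B votes "no_match", so the forest returns the majority label together with a combined score, mirroring how the per-tree classifications "may differ for each tree" yet still yield a single output.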
Regarding Claim 2, Duke in view of Zhou in view of Zhong in further view of Singh discloses all the limitations of claim 1.
Duke further discloses wherein the at least one proposed header is generated using a machine learning model trained based on the at least one of a name of the entity (paragraph [0091], “…and may even have access to other data of that user (e.g., full name, exact age or date-of-birth, email address, current location)… the system of the present invention may operate in real time to generate and to construct on-the-fly a particular digital ad unit that is specifically tailored to this specific end-user;” (i.e., as explained in rejection of claim 1, a machine learning can create ads based on the user name.)),
a brand of the entity (paragraph [0065], and Fig.6, "An Ad Construction Unit 240 of the system is now ready to generate an ad or series of ads for Chris. For example, the ad creative brief (for a new desired ad) is transformed or converted into feature vectors that are suitable for use by machine learning algorithms. Different algorithms may be used to create these feature vectors; for example, feature encoding, TF-IDF, word embedding, and/or other algorithms. The machine learning models that were created in the training phase, and the NLG models which have been trained on an advertising specific corpus (or that utilize an advertising-related or marketing-related dictionary or data-set or word-bank, for training and/or for various methods of generating natural language phrases), are used to generate the textual elements of the advertisement (e.g., headline, sub-header, call to action) based on the feature vectors provided by the ad creative brief of the new desired ad." and paragraph [0070], “the system may enable micro-targeted and personalized level of ad generation for each new ad or for each brand or client;” (i.e., examiner reads "at least one of" as presenting options that are not all required and thus not given patentable weight.)).
Regarding Claim 3, Duke in view of Zhou in view of Zhong in further view of Singh discloses all the limitations of claim 1.
Zhong further discloses further comprising including in the at least one recommended header at least one character to identify meta-data configured to be used during at least one of a scrubbing process (paragraph [0058], Fig.2:10 "Model-creation apparatus 210 then uses a training technique (e.g., gradient descent and backpropagation), a loss function (e.g., cross entropy), and/or one or more hyperparameters to update parameter values of embedding model 208 in a way that reduces the error between the output of embedding model 208 and the corresponding labels 214." (i.e., modifying the parameters to reduce error of output and corresponding labels.)).
The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Duke to implement Zhong's training technique in order to provide users with more accurate outputs that reduce the error (Zhong, par.58) between label and output, such as improving Duke's search as described in paragraph 16.
Regarding Claim 7, Duke in view of Zhou in view of Zhong in further view of Singh discloses all the limitations of claim 1.
Zhong further discloses further comprising approving the at least one recommended header based on a dissimilarity variance with distributed registered headers (paragraph [0089], Fig.4:408, "Embedding match scores are calculated between the input string and standardized entities represented by the subset of embeddings and/or retrieved from the inverted index based on distances between embeddings of the standardized entities and the first embedding in the vector space (operation 408).").
The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.
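The embedding match scoring cited from Zhong ([0089]), where scores are calculated from distances between an input string's embedding and the standardized entities' embeddings in the vector space, can be sketched as below. This is an illustrative sketch only; the entity names, the 3-dimensional embeddings, and the use of cosine similarity as the distance-based score are all hypothetical choices for demonstration:

```python
# Sketch of Zhong's embedding match scoring ([0089]): compute a score
# for each standardized entity from the distance between its embedding
# and the input string's embedding in the vector space.
# Entity names and embedding values below are hypothetical toy data.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

entity_embeddings = {
    "summer sale": [0.9, 0.1, 0.2],
    "privacy notice": [0.1, 0.8, 0.5],
}

def match_scores(input_embedding, entities):
    # Higher score corresponds to a smaller distance in the vector space.
    return {name: cosine_similarity(input_embedding, emb)
            for name, emb in entities.items()}

scores = match_scores([0.85, 0.15, 0.25], entity_embeddings)
best = max(scores, key=scores.get)  # entity nearest to the input string
```

A dissimilarity check of the kind recited in claim 7 would operate on the same quantities, approving a header whose distance to already-registered headers exceeds a variance bound rather than selecting the nearest match.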
Regarding Claim 9, Duke in view of Zhou in view of Zhong in further view of Singh discloses all the limitations of claim 1.
Duke further discloses further comprising training the machine learning model to learn a threshold (paragraph [0105], “wherein said set of advertisement elements is generated based on analysis of said input and detection that said advertisement elements correspond to previous performance results that are beyond a pre-defined threshold;”).
Zhou further discloses further comprising training the machine learning model to learn a threshold wherein headers having a classification greater than the threshold are not recommended as having a high probability of being wrongly associated with the requesting entity (paragraph [0038], “If the Boolean filtering circuitry 218 determines that the quality metric exceeds the quality threshold, the filtering circuitry 202 may effect provision of the first sponsored content to a second client device (e.g., the user device 120) for display thereon (410). For example, if the quality metric exceeds the quality threshold,” and paragraph [0042], Fig.6:604, " The terms “exceeds a quality threshold” or “higher quality metric” are used herein. However, in one particular implementation, the quality metric is a predictor of the quality of the sponsored content in that it is a probability that the processed sponsored content is of bad quality (e.g., a higher score indicates lower quality for the sponsored content). Thus, in this implementation, although the quality “exceeds” a threshold, the numerical value of the quality metric would operate in inverse and would be below the numerical quality threshold to indicate higher quality sponsored content. Accordingly, upon filtering by the filtering circuitry 202, sponsored content of lesser quality is served with lower priority, is served only when needed, or is altogether prohibited from being served." (i.e., although Zhou uses exceeding threshold as recommending, par.42 states that threshold can be reversed and exceeding the threshold can mean indicating as wrongly associated.))
and headers having a classification lower than the threshold are recommended as having a high probability of not being wrongly associated with the requesting entity (paragraph [0040], “if the quality metric does not exceed the quality threshold, then the Boolean filtering circuitry 218 may flag or designate the received sponsored content as of lower quality or of unsuitable quality for displaying,” and paragraph [0042], Fig.6:604, " The terms “exceeds a quality threshold” or “higher quality metric” are used herein. However, in one particular implementation, the quality metric is a predictor of the quality of the sponsored content in that it is a probability that the processed sponsored content is of bad quality (e.g., a higher score indicates lower quality for the sponsored content). Thus, in this implementation, although the quality “exceeds” a threshold, the numerical value of the quality metric would operate in inverse and would be below the numerical quality threshold to indicate higher quality sponsored content. Accordingly, upon filtering by the filtering circuitry 202, sponsored content of lesser quality is served with lower priority, is served only when needed, or is altogether prohibited from being served." (i.e., same explanation as stated for exceeding a threshold, therefore, below a threshold means low risk of bad content, and the ad would be recommended.)).
The proposed combination as well as the motivations for combining the references presented in the rejection of the parent claim apply to this claim and are incorporated herein by reference.
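The inverse threshold semantics relied on from Zhou ([0042]), where the metric is a probability that the content is bad, so a value above the threshold indicates a header that should not be recommended, can be sketched as follows. This is an illustrative sketch only; the function, header names, scores, and the 0.5 threshold are hypothetical examples, not anything disclosed in the references:

```python
# Sketch of the inverse threshold semantics in Zhou ([0042]): the
# classification is a probability that a header is wrongly associated
# with the requesting entity ("bad"), so a score ABOVE the threshold
# means the header is NOT recommended, while a score below it means
# the header is recommended.
# Header names, scores, and the threshold value are hypothetical.
def recommend(headers_with_scores, threshold=0.5):
    recommended, rejected = [], []
    for header, bad_probability in headers_with_scores:
        if bad_probability > threshold:
            rejected.append(header)      # high probability of wrong association
        else:
            recommended.append(header)   # low probability of wrong association
    return recommended, rejected

recommended, rejected = recommend([("Header A", 0.2), ("Header B", 0.9)])
```

Here "Header B" exceeds the threshold and is withheld, matching Zhou's statement that content of lesser quality "is altogether prohibited from being served", while "Header A" falls below the threshold and is recommended.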
Regarding Claim 10, it is similar in scope to claim 1 and is thus rejected under the same rationale. Examiner notes Duke discloses a system (Duke, paragraph [0127], “Some implementations may utilize an automated method or automated process, or a machine-implemented method or process, or as a semi-automated or partially-automated method or process, or as a set of steps or operations which may be executed or performed by a computer or machine or system or other device.”).
Regarding Claim 11, it is similar in scope to claim 2 and is thus rejected under the same rationale.
Regarding Claim 15, it is similar in scope to claim 9 and is thus rejected under the same rationale.
Regarding Claim 16, it is similar in scope to claim 1 and is thus rejected under the same rationale. Examiner notes Duke discloses a computer readable medium (Duke, paragraph [0126], “Some embodiments may include a non-transitory storage medium or storage article having stored thereon instructions or code that, when executed by a machine or a hardware processor, cause such machine or hardware processor to perform a method as described.”).
Regarding Claim 17, it is similar in scope to claim 2 and is thus rejected under the same rationale.
Claim(s) 8 is rejected under 35 U.S.C. 103 as being unpatentable over Duke (US-20190392487-A1) in view of Zhou (US-20170193545-A1) in view of Zhong (US-20210303638-A1) in view of Singh (US-20180308003-A1) in further view of Baracaldo (US-20200358599-A1).
Regarding Claim 8, Duke in view of Zhou in view of Zhong in further view of Singh discloses all the limitations of claim 1.
However, Duke in view of Zhou in view of Zhong in further view of Singh do not disclose further comprising approving the at least one recommended header using a distributed computation protocol while preserving privacy of the entity across a network of message operators.
Baracaldo discloses further comprising approving the at least one recommended header using a distributed computation protocol while preserving privacy of the entity across a network of message operators (paragraph [0039], "the term “privacy of computation” can refer to preserving data privacy within the computation of an algorithm…For instance, privacy of computation can be achieved using one or more secure multiparty computation protocols, which can allow N parties (e.g., wherein “N” is the number of parties) to obtain the output of a function over their N inputs while preventing knowledge of anything other than the output." and paragraph [0048], Fig.1:108, "the aggregator component 108 can implement a data privacy scheme within the federated learning environment facilitated by the system 100 that can ensure privacy of computation, privacy of outputs, and/or trust amongst participating parties." (i.e., applicant's distributed computation protocol is similar to a multiparty computation protocol)).
Duke in view of Zhou in view of Zhong in further view of Singh and Baracaldo are considered to be analogous to the claimed invention because they are in the same field of machine learning. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to have modified Duke to include multiparty computation protocols to preserve the privacy of users or participating parties when training the machine learning model and to protect against extraction (Baracaldo, paragraph [0100], “Advantageously, various embodiments described herein can combine one or more differential privacy processes and/or secure multiparty computations within a federated learning environment to improve the accuracy of machine learning models while preserving one or more privacy guarantees and/or protecting against extraction and/or collusion threats.”).
Allowable Subject Matter
Claims 4-6 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. Although there were no specific arguments made in the rejection for claims 4-6, 12-14 and 18-20 against Coulombe in the NFAO of 05/15/2025, examiner points to claim 4 as allowable because Duke, Zhou, Zhong, Singh, and Coulombe do not disclose “approving or disapproving the specific header based on the classification; in a case where the specific header is approved, receiving a selection of one of the recommended header and the approved specific header from the entity; and registering to the entity the selected header for the message.” in combination with the other limitations of dependent claim 4. A similar explanation applies to claims 12 and 18.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Erkin S. Abdullaev whose telephone number is (571)272-4135. The examiner can normally be reached Monday - Friday, 8:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Wesley Kim can be reached at (571)272-7867. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
ERKIN S. ABDULLAEV
Examiner
Art Unit 2648
/ERKIN ABDULLAEV/Examiner, Art Unit 2648
/WESLEY L KIM/Supervisory Patent Examiner, Art Unit 2648