DETAILED ACTION
Status of Claims
This Office action is in response to the amendment filed on 2/5/2026.
Claims 1 and 8 have been amended.
Claims 1, 3-6, and 8-11 are pending and have been examined.
Priority
The disclosure claims priority from United States Provisional Patent Application No. 63/252,354 filed October 5, 2021, which is hereby incorporated by reference.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 3-6, and 8-11 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1: Claims 1, 3-6, and 8-11 are directed to a method. Thus, on their face they fall within the four statutory categories of patentable subject matter.
Step 2A prong 1:
The following limitations, when considered individually and as an ordered combination, are merely descriptive of abstract concepts:
Claim 1:
generating an initial page for the product for purchase based on a long tail query phrase, the long tail query phrase generated by appending a product name to a cluster input;
processing unstructured text from different sources via open topic modelling to generate selected text based on the long tail query phrase, the unstructured text retrieved via sources;
performing topic embedding on the processed unstructured text;
inserting the selected text to the initial page based on the long tail query phrase;
associating selected text with products; and
obtaining or purchasing ads based on the generated initial page and linking the ads to the dynamic landing page such that selecting the ads directs the user to the dynamic landing page.
The following dependent claim limitations, when considered individually and as an ordered combination, are merely further descriptive of abstract concepts:
Claim 3:
receiving the cluster input;
associating consumer products associated with the long tail query phrase; and
validating the long tail query phrase.
Claim 4:
generating at least one of a traffic or product quality score for the query phrase.
Claim 5:
further comprising, before receiving the cluster input, retrieving unstructured or structured text associated with the product for purchase;
processing the unstructured or structured text to generate a product for purchase topic;
retrieving other topics associated with the product for purchase topic;
aligning unstructured or structured text with the product for purchase; and
naming a combination of the unstructured or structured text, the product for purchase, the product for purchase topic and the topics associated with the product for purchase topic as the cluster input.
Claim 6:
further comprising storing the combination of the unstructured or structured text, the product for purchase, the product for purchase topic and the topics associated with the product for purchase topic as the cluster input.
Claim 8:
further comprising, before generating an initial page based on a query phrase: tracking keywords associated with the product for purchase, where tracking keywords includes performing an initial validation of the keywords;
generating at least one long tail query phrase in a form of a set of landing page candidates.
Claim 9:
wherein tracking keywords associated with the product for purchase comprises: scraping search result pages.
Claim 10:
comprising: performing at least one further validation of the keywords if an initial validation does not meet a predetermined level.
Claim 11:
wherein performing at least one further validation of the keywords comprises at least one of performing a validation based on keyword volume; performing a validation based on a level of traffic the keywords have on search engines; performing a validation based on keyword relevance; or performing a validation based on keyword deduplication.
The claims recite a manner of generating a page by combining a long tail query phrase with a product, using topic modelling to select text for inclusion in the page, and obtaining ads based on the initial page for display within a dynamic page. Thus, when considered individually and as an ordered combination, the claims embody certain methods of organizing human activity, specifically commercial interactions (in the form of advertising, marketing or sales activities or behaviors).
Further, but for the recitation of a generic computing device, the operations of generating the initial page, processing text, performing topic embedding, inserting text, associating selected text, and obtaining ads could be performed by a human, either mentally or with pen and paper. Thus, the claims also recite a mental process.
Step 2A prong 2: This judicial exception is not integrated into a practical application. The claims recite the following additional elements: an initial website page (claims 1, 8); a processing unit (claim 1); electronically retrieving via online sources (claim 1); a dynamic website landing page (claim 1); clicking on ads directs a user to the dynamic website landing page (claim 1); a website landing page candidate (claim 8); and website search result pages (claim 9).
The processing unit is recited at a high level of generality and merely applies the abstract idea using a generic computing device. The computing device is used to process data (generating) and send and receive data (receiving). Nothing in the claims improves upon computers themselves, computer technology, or a technical field (See MPEP 2106.05(f)).
The initial website page, electronically retrieving via online sources, dynamic website landing page, website landing page candidate, and website search result pages merely provide a general link to a particular technological environment in which to practice the abstract idea. Nothing in the claims improves upon website technology, electronically retrieving information from online sources, or a technical field (See MPEP 2106.05(h)).
Further, clicking on ads to direct a user to the dynamic website landing page is recited at a high level of generality and amounts to mere computer implementation. Nothing in the claims improves the click-through functionality of online advertising technology or a technical field (See MPEP 2106.05(f)).
Accordingly, when considered both individually and as an ordered combination, the additional elements do not impose any meaningful limits on practicing the abstract idea.
Step 2B: The claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception. As above with regard to practical application, the additional elements, when considered both individually and as an ordered combination, do not provide an inventive concept, as they merely provide generic computing components used as a tool to implement the abstract idea and provide a general link to a particular technological environment or field of use (i.e., online).
Further, clicking on ads to direct a user to the dynamic website landing page was well understood, routine, or conventional at the time of the claimed invention (Hopwood, US 2013/0124300 (2013): “Other conventional advertisements allow for user interaction by clicking on or clicking "through" the ad to an off-page link or other destination, where data and information related to the advertisement are presented.”; Herrmann et al., US 7,406,508 (2008): “This form is useful for supporting conventional functionality such that when the user clicks on the advertisement 22, the web browser directs the user to a second web page containing the associated advertising information.”).
As a result, the claims are not patent eligible.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under pre-AIA 35 U.S.C. 103(a) are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 3-6, and 8-9 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. 9,110,977 (“Pierre”) in view of U.S. Pat. 10,965,812 (“Das”), further in view of Misra et al. (US 2021/0182912), and further in view of Gopinath et al. (US 2011/0066497).
As per claim 1, Pierre discloses a method of generating a dynamic website landing page for a product for purchase for e-commerce comprising (“Computing device(s) 610 are communicatively connected through network 600 to data sources 603 and 605 and to web servers 620 that host web sites in target online domain 625. For illustration purposes only, data sources 603 and 605 are shown in FIG. 6 as being separate from target domain 625; however, in various embodiments and operational contexts, some or all of data sources 603 and 605 may be hosted on servers within the target domain 625”) (Col. 22 Ln. 13-20):
generating an initial website page, via a processing unit, for the product for purchase based on a long tail query phrase, the long tail query phrase generated by appending a product name to a cluster input (“the techniques described herein are able to address, in real or near real-time, a much larger segment of the long-tail query search market and to produce landing web pages and other web content that rank well in search engine results and are responsive and more relevant to web searches performed by users. Since the techniques described herein autonomously and automatically produce web pages that are highly relevant and responsive to long-tail queries, which tend to be highly specific with multiple keywords, any contextual advertising that is displayed on or associated with these web pages can also be more relevant to the information sought by the users, thereby leading to significant monetization opportunities”) (Col. 3 Ln. 9-21, Col. 23 Ln 43-52, Col. 6 Ln. 28-41, Examiner’s note: teaches providing a dynamic or personalized landing page based on a user search for various information like cheap hotels in London that can include ads);
processing unstructured text from different online sources via open topic modelling to generate selected text based on the long tail query phrase, the unstructured text electronically retrieved via online sources (The underlined limitation is taught by Das, below. Examiner interprets that the related content additionally placed in the outputted webpage is based on determinations like linguistics to long tail queries and target topic terms, and that both the automatic title and the determined content are placed on the webpage that is returned to the user) (“The linguistic database stores tagged and indexed representations of structured and unstructured data sources relevant to the target topic terms, where the data sources may include, but are not limited to, news articles, web pages, blog posts, social media comments, etc. The output from the linguistic discovery process is a set of relevant linguistic structures. The output linguistic structures include text information that is highly tagged with identifiers according to, without limitation, orthographic, lexical, syntactic, grammatical and semantic features and relationships. In some embodiments text information may be tagged for, without limitation, recognized entities such as geographic locations, persons, and organizations, and source metadata such as source name, author and publication date. The linguistic structures returned by the linguistic discovery process can describe many different facets of knowledge related to target topic terms including, but not limited to: which topic terms represent concepts, entities, persons, locations, products, organizations, dates, etc.”) (Col. 11 Ln. 47-65, Col. 21 ln. 35-55);
inserting the selected text to the initial website page based on the long tail query phrase (i.e. the related content that is additionally placed in the outputted webpage based on determinations like linguistics to long tail queries and target topic terms) (“obtaining one or more topic terms; automatically acquiring a set of information that is related to the one or more topic terms; automatically performing linguistic analysis on the set of information to determine a set of linguistic structures that are represented in the set of information; automatically expanding the set of linguistic structures utilizing semantic queries; automatically using the set of linguistic structures to create a set of content items that are responsive to searches that include the one or more topic terms; automatically generating one or more web pages that include the set of content items; and publishing the one or more web pages in one or more online domains”) (Col. 3 Ln. 25-36),
associating selected text with products (i.e. the related content that is additionally placed in the outputted webpage based on determinations like linguistics to long tail queries and target topic terms) (“Web page 502 is autonomously and automatically generated according to the techniques described herein. For example, the set of target topic terms "all inclusive vacation resorts" is automatically determined by analyzing data retrieved from various data sources. Then, a set of information that is related to the target topic terms is automatically acquired. Linguistic analysis on the acquired set of information is automatically performed to determine a set of linguistic structures that are represented in the set of information. The set of linguistic structures is then used to automatically create the content items in content blocks 506-512 and the natural language title 504. Web page 502 is then automatically generated to include natural language title 504 and content blocks 506-512” and “Entity category identifier field 406 is configured for storing an identifier that identifies an entity linguistic category which corresponds to the type of entity that the data element stored in field 402 references in the portion of text from which the data element is extracted. Examples of entity linguistic categories include, but are not limited to, a person, a product, a material object, a location, a company, a place, a thing, and any other category or classification that may be used to describe real-world entities”) (Col. 21 Ln. 40-53 and Col. 9 Ln. 22-30, Col. 6 Ln. 28-41);
and obtaining or purchasing ads based on the generated initial website page (Examiner’s note: automatic generation of a natural language title facilitates generating revenue through webpage advertisements) (“Content block 510 includes several content items that are natural language sentences tagged as "href" links that point to other automatically generated web pages with content that is similar or closely related to the target topic terms--e.g., such as content related to all inclusive vacation resorts at specific locations. Content block 512 includes several content items that are natural language sentences tagged as "href" links that point to other automatically generated web pages with content that is somewhat loosely related to the target topic terms--e.g., such as content related to vacation packages and all inclusive family vacations”) (Col. 21 Ln. 57 - Col. 22 Ln. 9).
Pierre does not specifically disclose processing unstructured text via open topic modelling to generate selected text based on the long tail query phrase; however, Das discloses processing unstructured text via open topic modelling to generate selected text based on the long tail query phrase (i.e. the long tail query phrase is associated with the computer text transcript that is more than one or two words) (“LDA algorithm executed by the topic modelling module 106a. As shown in FIG. 4, the topic modelling module 106a receives as input 402 the unstructured computer text from the historical voice call transcripts and determines a number of topics (K) based upon the input. Then, as described above in FIG. 3, the module 106a assigns the words to one or more of the topics 404. FIG. 5 is a diagram of an exemplary output of the LDA algorithm executed by the topic modelling module 106a” and “the historical voice call transcripts can each be associated with one or more outcomes—e.g., product purchase, website interaction, account opening, no transactions, etc.—determined by the server computing device 106. The topic flow categorization module 106b can analyze the outcome(s) and categorize each historical topic flow based upon the outcome(s). As set forth above, a transaction can be considered as positive (e.g., because it resulted in either income or a further customer interaction) or negative including the case of no transactions”) (Col. 12, Ln 22-31 and Col. 14, Ln 1-6).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to combine generating an initial page for the product for purchase based on a long tail query phrase, the long tail query phrase generated by appending a product name to a cluster input, the unstructured text electronically retrieved via online sources, inserting the selected text to the initial page based on the long tail query phrase, associating selected text with products, and obtaining ads based on the generated initial page, as disclosed by Pierre, with processing unstructured text via open topic modelling to generate selected text based on the long tail query phrase, as taught by Das, for the purpose of training the classification model to predict what topic should come next in a given topic flow to either maintain the positive categorization or increase the likelihood of a positive categorization result.
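The LDA topic-modelling step for which Das is cited above (receiving unstructured text, choosing a number of topics K, and assigning words to topics) can be illustrated with a toy collapsed Gibbs sampler. The corpus, function name, and hyperparameters below are hypothetical and purely illustrative; this is a minimal sketch of the general LDA technique, not the method of any cited reference:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, num_topics, iters=200, alpha=0.1, beta=0.01, seed=7):
    """Toy collapsed Gibbs sampling for LDA.

    docs: list of token lists. Returns a dict mapping each vocabulary
    word to the topic under which it has the highest count after the
    final sweep.
    """
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    # z[d][i] = current topic assignment of the i-th token of document d
    z = [[rng.randrange(num_topics) for _ in d] for d in docs]
    ndk = [[0] * num_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(num_topics)]  # topic-word counts
    nk = [0] * num_topics                                # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                # remove the token, resample its topic, add it back
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta)
                           / (nk[t] + V * beta) for t in range(num_topics)]
                k = rng.choices(range(num_topics), weights=weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return {w: max(range(num_topics), key=lambda t: nkw[t][w]) for w in vocab}

# Hypothetical mini-corpus echoing the "cheap hotels London" examples
docs = [
    "hotel london cheap hotel stay".split(),
    "cheap hotel booking london".split(),
    "vacation resort beach resort".split(),
    "beach vacation family resort".split(),
]
topics = lda_gibbs(docs, num_topics=2)
print(topics)
```

With so small a corpus the topic split is only suggestive, but the mechanics (K topics, per-word assignments) mirror what the quoted passage attributes to the topic-modelling module.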
Pierre in view of Das does not specifically disclose performing topic embedding on the processed unstructured text; however, Misra discloses performing topic embedding on the processed unstructured text (“A corpus of documents can be created using some or all available content. In one embodiment, preprocessing can be applied such as employing Natural Language Processing (NLP) processes to tokenize, remove stop words, create bigram and/or trigram models, and/or to apply name entity recognition analysis. Continuing with this example, the labels can be generated by implementing a number of steps. For example, several Latent Dirichlet Allocation (LDA) topic models can be fit or otherwise applied to the corpus, where the LDA topic models have varying numbers of topics. Each of the models can be compared using a coherence model to find the topic models that best fit the data which indicates or otherwise determines the number of classes and labels (e.g., microgenres) applicable to the textual corpus of documents. The tokenized corpus can be converted into a vector (e.g., a 300 dimension vector, although other sizes of dimensions can also be utilized) using a pre-trained word embedding algorithm or model (e.g., a GloVe word embedding) in order to cluster the words into similar meaning groups. Feature sets can then be generated from the clusters of words for each of the classes that had been identified from the topic model.”) (Paragraph [0023]; see also [0058]).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to perform topic embedding on the processed unstructured text, as disclosed by Misra, with the dynamic webpages of Pierre in view of Das in order to cluster the words into similar meaning groups ([0023]) and to select targeted advertisements ([0025]).
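The embedding step for which Misra is cited (converting tokens to vectors with a pre-trained model such as GloVe, then clustering them into similar-meaning groups) can be sketched with hand-made vectors and cosine similarity. The vectors, words, threshold, and function names below are hypothetical stand-ins for real pre-trained embeddings; this is a minimal sketch of the general clustering idea, not the cited reference's implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_words(vectors, threshold=0.8):
    """Greedy single-pass clustering: each word joins the first cluster
    whose founding word is similar enough, otherwise starts a new one."""
    clusters = []  # list of lists of words
    for word, vec in vectors.items():
        for c in clusters:
            if cosine(vectors[c[0]], vec) >= threshold:
                c.append(word)
                break
        else:
            clusters.append([word])
    return clusters

# Hand-made 3-dimensional "embeddings"; a real system would use a
# pre-trained model such as GloVe, as the Misra quotation describes.
vecs = {
    "hotel":   (0.9, 0.1, 0.0),
    "motel":   (0.85, 0.15, 0.05),
    "resort":  (0.8, 0.2, 0.1),
    "beach":   (0.1, 0.9, 0.1),
    "ocean":   (0.05, 0.95, 0.05),
    "invoice": (0.0, 0.1, 0.95),
}
groups = cluster_words(vecs)
print(groups)  # → [['hotel', 'motel', 'resort'], ['beach', 'ocean'], ['invoice']]
```

The lodging words land in one group and the seaside words in another, illustrating how similar-meaning clusters can feed downstream steps such as feature-set generation.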
Pierre in view of Das in view of Misra does not expressly disclose linking the ads to the dynamic website landing page such that clicking the ads directs a user to the dynamic website landing page; however, Gopinath teaches linking the ads to the dynamic website landing page such that clicking the ads directs a user to the dynamic website landing page (Paragraph [0081]: “In both cases, when a user clicks on (or mouses over) the advertisement 275 (or a particular item in the advertisement 275), a personalized dynamic landing page (e.g., as hosted by the advertiser's server) is generated to provide consumers with an extended array of personalized product recommendations consistent with the advertisement 275 being clicked.”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to link the ads to the dynamic website landing page such that clicking the ads directs a user to the dynamic website landing page, as disclosed by Gopinath, with the dynamic webpages of Pierre in view of Das in view of Misra in order to extend an array of personalized product recommendations consistent with the advertisement being clicked ([0081]).
As per claim 3, Pierre discloses wherein generating an initial website page, via a processing unit, for the product for purchase based on a long tail query phrase comprises: receiving the cluster input (“the techniques described herein are able to address, in real or near real-time, a much larger segment of the long-tail query search market and to produce landing web pages and other web content that rank well in search engine results and are responsive and more relevant to web searches performed by users. Since the techniques described herein autonomously and automatically produce web pages that are highly relevant and responsive to long-tail queries, which tend to be highly specific with multiple keywords, any contextual advertising that is displayed on or associated with these web pages can also be more relevant to the information sought by the users, thereby leading to significant monetization opportunities”) (Col. 3 Ln. 9-21, Col. 23 Ln 43-52, Col. 6 Ln. 28-41, Examiner’s note: teaches providing a dynamic or personalized landing page based on a user search for various information like cheap hotels in London that can include ads);
associating consumer products associated with the long tail query phrase (“when a user submits to a search engine a long-tail query with the target topic terms, the user is likely to find the generated web site very near the top of the rankings that are returned (e.g., top 10 results) by the search engine in the search results”) (Col. 23 Ln. 48-52);
and validating the long tail query phrase (i.e. The system further may track the information generated and use it as feedback for statistical modeling (e.g. validation)) (“publishing module 13 is also configured to generate information that is provided to tracking module 14. Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603”) (Col. 7 Ln. 65 - Col. 8 Ln. 13).
As per claim 4, Pierre discloses generating at least one of a traffic or product quality score for the query phrase (i.e. demand analysis logic can generate a probabilistic model for expected revenue that can be used to score and rank the above topic terms, i.e. associated with generating a traffic score) (“By using advertising CPC information, demand analysis logic 614 can then determine if there is a demand for advertising--e.g., if various companies are trying to buy ads--on web pages having the above topic terms. Based on this information, demand analysis logic 614 (or a component thereof, such as expected revenue module 5) can generate a probabilistic model for expected revenue that can be used to score and rank the above topic terms and to store this information in topic database 6. Thereafter, demand analysis logic 614 can use the information in topic database 6 to determine that the above topic terms are a good candidate for which web content should be automatically created and published”) (Col. 5 Ln. 65 - Col. 6 Ln. 9) and (i.e. product quality, i.e. associated with quality content) (“create quality content for a much wider range of target search topics (ultimately billions) than would be possible with human-based approaches. Thus, the techniques described herein are able to address, in real or near real-time, a much larger segment of the long-tail query search market and to produce landing web pages and other web content that rank well in search engine results and are responsive and more relevant to web searches performed by users.”) (Col. 3 Ln. 7-14, Col. 7 Ln. 35-45).
As per claim 5, Pierre discloses further comprising, before receiving the cluster input, retrieving unstructured or structured text associated with the product for purchase (“"topic term" refers to one or more words, phrases, keywords, or any other structured or unstructured portions of text that can be used by a user to search for information on a given topic. As used herein, "automatic" and "automatically" means that the referenced functionality is performed by a computing device without receiving direct input from a user and not in response to user input, and "autonomous" and "autonomously" means that the referenced functionality is performed by the computing device without being controlled by a human person”) (Col. 3 Ln. 50-58) and (“The linguistic database stores tagged and indexed representations of structured and unstructured data sources relevant to the target topic terms, where the data sources may include, but are not limited to, news articles, web pages, blog posts, social media comments, etc.”) (Col. 11 Ln. 47-51);
processing the unstructured or structured text to generate a product for purchase topic (“page assembler 112 may automatically generate new content 113 in the form of markup language document(s) that comprise a web page or in the form of output data in a format that is suitable for rendering in particular target medium such as, for example, a mobile application and/or a web service that supports mobile application(s), a social media website, a social network feed, a content management system, etc.”) (Col. 16 Ln. 24-31);
retrieving other topics associated with the product for purchase topic (i.e. topics determined from this automatic demand analysis include "all inclusive vacation resorts", "where is a cheap hotel in London?", and "inexpensive hotels London" which recites a product for purchase topic) (“content creation process may be topic terms that include multiple keywords such as "inexpensive hotels London", alternative variants that express the same information need such as "London hotels cheap", or a natural language query such as "Where is a cheap hotel in London?". The output of the content creation process may be a web page that includes a map that maps a list of hotel names and addresses of cheap and inexpensive hotels in London that are responsive to the topic terms”) (Col. 6 Ln. 33-41, Col. 3 Ln. 43-52);
aligning unstructured or structured text with the product for purchase (“page assembler 112 may automatically generate new content 113 in the form of markup language document(s) that comprise a web page or in the form of output data in a format that is suitable for rendering in particular target medium such as, for example, a mobile application and/or a web service that supports mobile application(s), a social media website, a social network feed, a content management system, etc.”) (Col. 16 Ln. 24-31);
and naming a combination of the unstructured or structured text, the product for purchase, the product for purchase topic and the topics associated with the product for purchase topic as the cluster input (i.e. topics determined from this automatic demand analysis include "all inclusive vacation resorts", "where is a cheap hotel in London?", and "inexpensive hotels London", which recite a product for purchase topic) (“Part-of-speech category identifier field 404 is configured for storing an identifier that identifies a part-of-speech linguistic category which corresponds to how the data element stored in field 402 is used in the portion of text from which the data element is extracted. Examples of part-of-speech linguistic categories include, but are not limited to, a proper name category, a verb group category, a determiner category, a noun category, a prepositional category, and a data context category”) (Col. 9 Ln. 13-21, Col. 3 Ln. 43-52).
As per claim 6, Pierre discloses further comprising storing the combination of the unstructured or structured text, the product for purchase, the product for purchase topic and the topics associated with the product for purchase topic as the cluster input (“By using advertising CPC information, demand analysis logic 614 can then determine if there is a demand for advertising--e.g., if various companies are trying to buy ads--on web pages having the above topic terms. Based on this information, demand analysis logic 614 (or a component thereof, such as expected revenue module 5) can generate a probabilistic model for expected revenue that can be used to score and rank the above topic terms and to store this information in topic database 6. Thereafter, demand analysis logic 614 can use the information in topic database 6 to determine that the above topic terms are a good candidate for which web content should be automatically created and published”) (Col. 5 Ln. 65 - Col. 6 Ln. 9) and (“create quality content for a much wider range of target search topics (ultimately billions) than would be possible with human-based approaches. Thus, the techniques described herein are able to address, in real or near real-time, a much larger segment of the long-tail query search market and to produce landing web pages and other web content that rank well in search engine results and are responsive and more relevant to web searches performed by users.”) (Col. 3 Ln. 7-14, Col. 7 Ln. 35-45).
As per claim 8, Pierre discloses, further comprising, before generating an initial validation website page based on a query phrase: tracking keywords associated with the product for purchase, where tracking keywords includes performing an initial validation of the keywords (i.e. the system further may track the information generated and use it as feedback for statistical modeling (e.g. validation)) (“publishing module 13 is also configured to generate information that is provided to tracking module 14. Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603”) (Col. 7 Ln. 65 - Col. 8 Ln. 13, Col. 6 Ln. 33-51);
generating at least one long tail query phrase in a form of a set of website landing page candidates (“Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603.”) (Col. 7 Ln. 67- Col. 8 Ln. 12).
As per claim 9, Pierre discloses, wherein tracking keywords associated with the product for purchase comprises: scraping website search result pages (i.e. the system further may track the information generated and use it as feedback for statistical modeling (e.g. validation)) (“publishing module 13 is also configured to generate information that is provided to tracking module 14. Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603”) (Col. 1 Ln. 23-29, Col. 2 Ln. 57-67).
Claim(s) 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pat. 9110977 (“Pierre”) in view of U.S. Pat. 10965812 (“Das”) in view of Misra et al (US 2021/0182912) in view of Gopinath et al (US 2011/0066497) further in view of Myslinski (US Pat. 8,990,234).
As per claim 10, Pierre discloses, performing at least one further validation of the keywords if an initial validation does not meet a predetermined level (the underlined limitation is taught by secondary reference Myslinski) (i.e. the system further may track the information generated and use it as feedback for statistical modeling (e.g. validation)) (“publishing module 13 is also configured to generate information that is provided to tracking module 14. Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603”) (Col. 7 Ln. 65 - Col. 8 Ln. 13).
Pierre specifically does not disclose, if an initial validation does not meet a predetermined level; however, Myslinski discloses, if an initial validation does not meet a predetermined level (Examiner's note: “FIG. 4 illustrates a flowchart of a method of implementing efficient fact checking using a broadening implementation approach according to some embodiments. In some embodiments, the step 104 of fact checking utilizes a broadening implementation approach. In the step 400, an exact match fact check is implemented. For example, the phrase ‘the President lied about taxes going up’ has been parsed out for fact checking. In the step 400, that exact phrase is searched for within source information. If the exact phrase is found in a sufficient number of sources (e.g., above a lower threshold), then a result is returned (e.g., true), in the step 402. If the exact phrase is not found (e.g., equal to or below the lower threshold of sources), then a second fact check implementation is utilized, in the step 404. For example, the second fact check implements a pattern matching search. The pattern matching search is able to be implemented in any manner, for example, pattern matching utilizes subject-verb-object matching to determine if the same or a similar item matches. Any type of pattern matching is able to be implemented. If the second fact check returns with sufficient confidence (e.g., number of matches and/or sources above a lower threshold or a pattern matching confidence score above a lower threshold), then a result is returned, in the step 406. If the second fact check does not return with sufficient confidence, then a third fact check is implemented, in the step 408.”) (Col. 6 Ln. 7-31).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the applicant’s invention to generate an initial page for the product for purchase based on a long tail query phrase, the long tail query phrase generated by appending a product name to a cluster input, the unstructured text electronically retrieved via online sources, inserting the selected text to the initial page based on the long tail query phrase, associating selected text with products, and obtaining ads based on the generated initial page, as disclosed by Pierre, with performing the further validation if an initial validation does not meet a predetermined level, as taught by Myslinski, for the purpose of providing users with factually accurate information, limiting the spread of misleading or incorrect information, providing additional revenue streams, and supporting many other advantages.
As per claim 11, Pierre discloses, wherein performing at least one further validation of the keywords comprises at least one of performing a validation based on keyword volume (i.e. the system further may track the information generated and use it as feedback for statistical modeling (e.g. validation)) (“Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603.”) (Col. 7 Ln. 67 - Col. 8 Ln. 12);
performing a validation based on a level of traffic the keywords have on search engines (“Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603.”) (Col. 7 Ln. 67- Col. 8 Ln. 12);
performing a validation based on keyword relevance; or performing a validation based on keyword deduplication (“Tracking module 14 is logic configured to track user behavior and to determine what people are clicking on, what they like, etc., by using the information received from publishing module 13 as well as information received or retrieved from web sites on which the content generated by content creation and publishing logic 616 is posted. Tracking module 14 generates tracking information and provides the tracking information as feedback 16 to demand analysis logic 614. Future demand prediction module 1 uses the feedback tracking information to enhance the statistical modeling for demand prediction and forecasting for the various topics that are determined based on the information retrieved from data sources 603.”) (Col. 7 Ln. 67- Col. 8 Ln. 12).
Response to Arguments
With regards to § 112 rejections:
The Examiner has considered and finds persuasive applicant’s arguments regarding rejections under 35 USC 112.
With regards to § 101 rejections:
The Examiner has considered but does not find persuasive applicant’s arguments regarding rejections under 35 USC 101. Providing more dynamic information is not a technical improvement. The improvement of providing more dynamic information is an improvement to the abstract idea itself and not to technology or a technical field. Improving the content that a user views because it is essentially better content than some other content is likewise an improvement to the abstract idea itself and not to technology or a technical field. As a result, the rejection has been maintained.
With regards to § 103 rejections:
Applicant's arguments with regard to rejections under 35 USC 103 are moot in light of new grounds of rejection, which have been necessitated by amendment.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following references have been cited to further show the state of the art with respect to analyzing content to identify one or both of classification values and categories, generating training data that is added to the training set, and identifying keywords that are unacceptable for publishing via the content publishing platform.
U.S. Pub. No. 20220179916 (“Muthuswamy”)
Muthuswamy discloses, the server computer receives general topic news articles (these may be structured or unstructured), and using NLP or other topic recognition techniques known to those having skill in this field, the server computer filters the provided articles relevant to preselected topics.
U.S. Pub. No. 20140101086 (“Lu”)
Lu discloses, the source data may include unstructured information, and the global topic model building module may process the unstructured information according to any known method of interpreting unstructured information. For example, the global topic model building module may perform any known method for data mining or text analytics in order to categorize the information.
THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER STROUD whose telephone number is (571)272-7930. The examiner can normally be reached Mon. - Fri. 9AM-5PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Waseem Ashraff can be reached at (571) 270-3948. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
CHRISTOPHER STROUD
Primary Examiner
Art Unit 3621B
/CHRISTOPHER STROUD/Primary Examiner, Art Unit 3621