Prosecution Insights
Last updated: April 19, 2026
Application No. 19/205,864

Filtering and Scoring of Web Content

Non-Final OA — §101, §103, Double Patenting
Filed: May 12, 2025
Examiner: HU, JENSEN
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: Soci Inc.
OA Round: 1 (Non-Final)
Grant Probability: 68% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 3y 7m
Grant Probability with Interview: 95%

Examiner Intelligence

Career Allow Rate: 68% — above average (365 granted / 539 resolved; +12.7% vs TC avg)
Interview Lift: +27.1% for resolved cases with interview — strong
Typical Timeline: 3y 7m average prosecution; 12 applications currently pending
Career History: 551 total applications across all art units
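The headline allow rate is a simple ratio of the counts shown above. As a sanity check (the numbers are taken from this page; nothing else is assumed), it can be recomputed:

```python
# Recompute the examiner's career allow rate from the counts shown above.
granted = 365
resolved = 539

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # 67.7%, displayed rounded as 68%
```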

Statute-Specific Performance

§101: 19.6% (-20.4% vs TC avg)
§103: 49.0% (+9.0% vs TC avg)
§102: 17.5% (-22.5% vs TC avg)
§112: 6.3% (-33.7% vs TC avg)

TC averages are Tech Center estimates. Based on career data from 539 resolved cases.
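The per-statute deltas imply a common Tech Center baseline. A quick check (rates and deltas taken from the table above) back-computes the implied average:

```python
# Back-compute the Tech Center average implied by each statute's rate and delta.
rates = {
    "101": (19.6, -20.4),
    "103": (49.0, +9.0),
    "102": (17.5, -22.5),
    "112": (6.3, -33.7),
}

for statute, (rate, delta) in rates.items():
    implied_tc_avg = round(rate - delta, 1)
    print(f"§{statute}: implied TC average = {implied_tc_avg}%")  # 40.0% for each statute
```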

Office Action

Grounds: §101, §103, Double Patenting
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claims 2-21 are pending in this application.

Claim Objections

Claim 4 is objected to for containing a grammatical error: the claim does not end with a period. An appropriate correction is required.

Claim 4 is also objected to for lack of antecedent basis: "wherein the mapping" in line 1 lacks antecedent basis. An appropriate correction is required.

Claim 19 recites "calculating, by the first server computer, both a content item raw performance score and a relevance score for the retrieved content item of interest to produce a raw scored content item." The specification does not clearly define how a "raw scored content item" is produced from a relevance score. Examiner respectfully requests that Applicant identify the portions of the specification that detail this claim limitation.

Claim Rejections - 35 USC § 101

Abstract idea.

Double Patenting

DP with prior applications 14/736196, 16/446259, and 17/345293.

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens.
An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.

Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,299,056 in view of Wong, US 2013/0204871 (hereinafter Wong). Although the claims at issue are not identical, they are not patentably distinct from each other. The claims are compared below.

Instant application, claim 2:
A method for collecting, scoring, and organizing online content, the method comprising: characterizing, by a processor of a first server, based on a level of online interaction, an online user, and searching, in response to a query received at a first client computing device by the user, a plurality of webpages presented at a plurality of websites hosted by at least a second server, each of the plurality of webpages displaying one or more content groupings including content items of potential interest to the user, the searching resulting in identifying a plurality of content items from one or more of the content groupings, at least one of the identified plurality of content items having been interacted with by the online user; collecting, by a processor of the first server, the identified plurality of content items having been interacted with by the user, along with a first and a second set of data set associated with each of the collected content items; forming, by a processor of the first server, a content collection from the collected content items, the content collection including each collected content item and its associated first and second set of data, the first set of data including a characterization of the website from which the collected content item was collected, and the second set of data including one or more metrics evaluating respective collected content items of the content collection; weighting, by a processor of the first server, the first and second data of each respective content item, the weighting being at least partially based on the characterization of the user and each respective website and each of the one or more respective metrics, so as to determine a relative weighting of each collected content item of the content collection with respect to its relevance to the online user; calculating, by a processor of the first server, for each collected content item of the content collection, a raw content item performance score based on the relative weighting; calculating, by a processor of the first server, a content item performance score, so as to produce a collection of scored content items, the collection of scored content items including scored first data and scored second data; and ordering, by a processor of the first server, in relation to its associated scored first and scored second set of data, each of the content items of the content collection to produce an ordered content collection, the ordering being based on a relation of each content item to one or more characteristics of the characterized online user.

U.S. 12,299,056, claim 1:
A method for collecting and scoring online content, the method comprising: searching, by a processor of a first server, in response to a first query received at a first client computing device, a plurality of webpages presented at a plurality of websites hosted by at least the first server, each of the plurality of webpages displaying one or more content groupings including one or more content items of potential interest, the searching resulting in identifying a plurality of content items from one or more of the content groupings; collecting, by a processor of a second server, the identified plurality of content items along with a first and a second set of data associated with each of the identified plurality of content items; forming, by a processor of the second server, a content collection from the identified plurality content items, the content collection including each collected content item and its associated first and second set of data, the first set of data including a characterization of the website from which the content item was collected, and the second set of data including one or more metrics evaluating respective content items of the content collection; weighting, by a processor of the second server, the first and second data of each respective content item, the weighting being at least partially based on the characterization of each respective website and each of the one or more respective metrics, so as to determine a relative weighting of each content item of the content collection; calculating, by a processor of the second server, for each collected content item of the content collection, a raw content item performance score based on the relative weighting; calculating, by a processor of the second server, a first content item performance score, to produce a collection of first scored content items, the collection of first scored content items including scored first data and scored second data; querying, via an API coupling the second server to the first server, in at least a passive manner, one or more of the plurality of webpages presented by at least the first server so as to continuously receive updated data, the updated data including one or more of updated content items of potential interest as well as updated first and second sets of data, and adjusting the first content item performance score based on the updated data; and ordering, by a processor of the server, in relation to its associated scored first and scored second set of data, each of the content items of the content collection to produce an ordered content collection, the ordering mapping each of the plurality of content items to its associated weighted first and second set of data.

Instant application, claim 14:
A content scoring system for scoring communication content derived from a content collection for use in a generation of a communication to be published by a user, the system comprising: at least one data processor; and a memory storing instructions which, when executed by the at least one data processor, result in operations comprising: a content database storing the content collection containing a plurality of content items collected from one or more social media platforms or online publications, the plurality of content items being associated with a first set of data related to the social media platform or online publication from which individual content items of the content collection were collected, and a second set of data associated with particular content items, the second set of data including one or more metrics evaluating the content items; a scoring generator for scoring the content items contained in the content collection to produce scored content items, the scoring characterizing a performance of the content items and being based in part on the first and second sets of data as well as a determined relevancy to the user, the scoring generator comprising: a determination processor configured for: weighting the first set of data relative to at least one parameter, the one or more metrics of the second set of data, and further weighting the first and second data relative to the relevancy of the user, calculating a raw content item performance score for each collected content item of the content collection, the raw content item performance score being based at least partially on the weighting of the first and second data, and calculating a content item performance score by applying a normalization function to the raw content items performance score to produce a collection of scored content items.

U.S. 12,299,056, claim 11:
A content scoring system for scoring communication content derived from a content collection, the system comprising: at least one data processor; and a memory storing instructions which, when executed by the at least one data processor, result in operations comprising: a content database storing the content collection containing a plurality of content items collected from one or more social media platforms or online publications, the plurality of content items being associated with a first set of data related to the social media platform or online publication from which individual content items of the content collection were collected, and a second set of data associated with particular content items, the second set of data including one or more metrics evaluating the content items; a scoring generator for scoring the content items contained in the content collection to produce scored content items, the scoring characterizing a performance of the content items and being based in part on the first and second sets of data, the scoring generator comprising: a determination processor configured for: weighting the first set of data relative to at least one parameter, and further for weighting the one or more metrics of the second set of data, calculating a raw content item performance score for each collected content item of the content collection, the raw content item performance score being based at least partially on the weighting of the first and second data, and calculating a content item performance score by applying a normalization function to the raw content items performance score to produce a collection of scored content items.

Instant application, claim 19:
A method for retrieving and scoring online content for use by a user in evaluating communication content for use in generating a communication, the method comprising: characterizing, by a first server computer, based on a level of online interaction, one or more interests of the user; searching, by the first server computer via a network, a webpage having one or more published items being hosted by a second server computer, each published item including one or more content items, the searching resulting in an identified content item of interest to the user; retrieving, by the first server computer from the second server computer, the identified content item of interest from the webpage, a first set of data associated with the webpage from which the content item is retrieved, and a second set of data associated with the retrieved content item of interest, the first set of data including a characterization of the webpage from which the content item was retrieved, and the second set of data including one or more metrics evaluating the retrieved content item of interest; calculating, by the first server computer, both a content item raw performance score and a relevance score for the retrieved content item of interest to produce a raw scored content item, the content item raw performance score being based in part on the first and second sets of data and one or more determined relevance factors; storing, at a database associated with the first server computer, the raw scored content item to produce a collection of raw scored content items; and weighting, by the first server computer, the raw scored content item relative to one or more of the scored content items in the collection of scored content items to produce a weighted scored content item; and mapping, by the first server computer, each weighted and scored content item of the content collection to produce a final ordered scored content collection, which order is based at least partially on the weighted score for each respective content item.

U.S. 12,299,056, claim 18:
A method for retrieving and scoring online content for use in evaluating communication content, the method comprising: querying in a passive and continuous manner, in accordance with a restriction, by a first server computer, via an API connection, a webpage having one or more published items being hosted by a second server computer, each published item including one or more content items, the querying resulting in a query identified content item of interest; retrieving, by the first server computer from the second server computer, the query identified content item of interest from the webpage, a first set of data associated with the webpage from which the content item is retrieved, and a second set of data associated with the retrieved query identified content item of interest, the first set of data including a characterization of the webpage from which the content item was retrieved, and the second set of data including one or more metrics evaluating the retrieved content item of interest; calculating, by the first server computer, a content item raw performance score for the retrieved query identified content item of interest to produce a raw scored content item, the content item raw performance score being based in part on the first and second sets of data; storing, at a database associated with the first server computer, the raw scored content item to produce a collection of raw scored content items; and weighting, by the first server computer, the raw scored content item relative to one or more of the scored content items in the collection of scored content items to produce a weighted scored content item; and ordering, by the first server computer, each weighted and scored content item of the content collection to produce a final ordered scored content collection, which order is based at least partially on the restriction and at least partially on the weighted score for each respective content item.

Claims 2 and 19 additionally recite "characterizing, by a first server computer, based on a level of online interaction, one or more interests of the user." However, Wong teaches a method of characterizing, by a processor of a first server, based on a level of online interaction, an online user (see Wong, [0011] – [0012], "engagement data is collected and factored"). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to characterize user interactions in order to efficiently retrieve relevant content items matching a user query.

Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-23 of U.S. Patent No. 11,036,817 in view of Wong. Although the claims at issue are not identical, they are not patentably distinct from each other. The claims are similarly rejected as disclosed above. Claims 2 and 19 additionally recite "characterizing, by a first server computer, based on a level of online interaction, one or more interests of the user." However, Wong teaches a method of characterizing, by a processor of a first server, based on a level of online interaction, an online user (see Wong, [0011] – [0012], "engagement data is collected and factored"). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to characterize user interactions in order to efficiently retrieve relevant content items matching a user query.

Claims 2-21 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 10,346,488 in view of Wong. Although the claims at issue are not identical, they are not patentably distinct from each other.

Instant Application — U.S. 10,346,488

Claim 2.

12. The method in accordance with claim 9, wherein the characterization of the website includes a number of webpages, webpage views, a webpage size, a number, frequency and/or consistency of one or more content items on the webpage.

13. The method in accordance with claim 12, wherein the one or more metrics evaluating the content item includes one or more of a number of content views, a content size, a content type, a content origin, HTML tag, a "like," a "forward," a "comment," an exclamation point, a question mark, or a number, frequency, and/or consistency of the content items on the page.

2. weighting, by a processor of the first server, the first and second data of each respective content item, the weighting being at least partially based on the characterization of the user and each respective website and each of the one or more respective metrics, so as to determine a relative weighting of each collected content item of the content collection with respect to its relevance to the online user; calculating, by a processor of the first server, for each collected content item of the content collection, a raw content item performance score based on the relative weighting; calculating, by a processor of the first server, a content item performance score, so as to produce a collection of scored content items, the collection of scored content items including scored first data and scored second data

4. The method in accordance with claim 2, wherein the mapping further comprises delineating each of the plurality of content items to its associated weighted first and second set of data

8. The method in accordance with claim 3, wherein the method further comprises providing, for display at a graphical user interface of the first or the second client computing device, data encapsulating the content item performance score for each of the content items of the ordered content collection.

1. A method comprising: retrieving, by at least one data processor executing a scored content generator, a web content collection comprising: first metadata associated with the web content collection as a whole, the first metadata comprising one or more of a size of the web content collection, a location from which the web content collection is retrieved, characteristics of the location, a number of content items, type of content items, characteristics of the content items, line count, page count, memory size, addresses, HTML tags, traffic statistics, views, and titles, content items, and second metadata associated with the content items, the second metadata comprising metrics characterizing (i) the content items and (ii) at least a portion of the web content collection, the metrics characterizing the content items comprising an evaluation of one or more of a like, dislike, tweet, retweet, favorite, +1, view, unique view, fan, follow, viral posting, paid posting, storyteller posting, click, hide, comment, a share, a forward, a comment, and other evaluation by a viewer of at least one of the content items; calculating, by at least one data processor executing a scored content generator and based on the metrics, a content item performance score for each of the retrieved content items that each characterize a level of user interaction with the content items, wherein the calculating comprises: determining at least one parameter based on the second metadata; applying at least one pre-determined factor to the at least one parameter, the pre-determined factor characterizing a relative weighting of the at least one parameter; calculating a raw content item performance score based on the at least one parameter and the at least one pre-determined factor by applying at least one weighting to the at least one parameter, the weighting characterizing a content-type dependent scaling of a pre-weighted raw content item performance score; and calculating the content item performance score by applying a mapping function to the raw content item performance score, where the content item performance score is between a maximum value and a minimum value; and providing, by at least one data processor, data encapsulating the content item performance scores to a first computing system.

The independent claims are similarly rejected as disclosed above. Claims 2 and 19 additionally recite "characterizing, by a first server computer, based on a level of online interaction, one or more interests of the user." However, Wong teaches a method of characterizing, by a processor of a first server, based on a level of online interaction, an online user (see Wong, [0011] – [0012], "engagement data is collected and factored"). It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to characterize user interactions in order to efficiently retrieve relevant content items matching a user query.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2-6 and 8-21 are rejected under 35 U.S.C. 103 as being unpatentable over Wong, US 2013/0204871 (hereinafter Wong), in view of Ramer et al., US 2009/0240568 (hereinafter Ramer).

For claim 2, Wong teaches a method for collecting, scoring, and organizing online content, the method comprising:

characterizing, by a processor of a first server, based on a level of online interaction, an online user (see Wong, [0011] – [0012], "engagement data is collected and factored"), and searching, in response to a query received at a first client computing device by the user (see Wong, [0028], identify "relevance of network objects identified in response to a user's query," [0030] – [0031], "the user initiates a search for information of interest by entering a query into his computing device"), a plurality of webpages presented at a plurality of websites hosted by at least a second server, each of the plurality of webpages displaying one or more content groupings including content items of potential interest to the user, the searching resulting in identifying a plurality of content items from one or more of the content groupings (see Wong, [0026], "restaurant may be represented by a set of URLs representing different web pages, for example, the home page of the restaurant, a menu page, a reservations page, a collection of reviews of the restaurant on sites like Yelp and/or Zagat," where web sites that host content represent one or more second servers, [0027], [0034], "The ingested URLs are indexed and stored by the web service in step 212," where objects grouped within the web pages represent the plurality of content items), at least one of the identified plurality of content items having been interacted with by the online user (see Wong, [0011] – [0012], [0035], "the web service collects social engagement data from various social media sites for each URL identified in response to the query");

collecting, by a processor of the first server, the identified plurality of content items having been interacted with by the user, along with…a second set of data set associated with each of the collected content items (see Wong, [0027], [0028], "utilize this social engagement data to score the relevance of network objects identified in response to a user's query," [0035], "the web service collects social engagement data from various social media sites for each URL identified in response to the query," where collection of social engagement data associated with a URL represents the second data set associated with objects).

Ramer teaches "collecting, by a processor of the first server, the identified plurality of content items having been interacted with by the user, along with a first…set of data set associated with each of the collected content items" (see Ramer, [0136], [0174], "Contextual information that may be associated with a website," [0176], collecting "contextual website data parameters" that characterize "Entertainment" and "Weather" websites, [0487], [0885], [0907], [0991] – [0092], [1043], [1050], [01390], where collected "contextual website data parameters" that identify a website associated with categories such as florist, caterer, photographer, or sports represent the first set of data associated with collected content items).

It would have been obvious to one skilled in the art before the effective filing date of the claimed invention to modify the teachings of Wong with the teachings of Ramer to efficiently assess website characteristic data to return more relevant search results (see Ramer, [0176], [0972]).
The combination further teaches forming, by a processor of the first server, a content collection from the collected content items, the content collection including each collected content item and its associated first and second set of data, the first set of data including a characterization of the website from which the collected content item was collected (see Ramer, [0136], [0174], [0176], collecting “contextual website data parameters” that characterize “Entertainment” and “Weather” websites, [0487], [0885], [0907], [0991] – [0092], [1043], [1050], [01390], where collected “contextual website data parameters” that identify a website associated with categories such as florist, caterer, photographer, or sports represent the first set of data associated with collected content items), and the second set of data including one or more metrics evaluating respective collected content items of the content collection (see Wong, [0035], “the web service collects social engagement data from various social media sites for each URL identified in response to the query,” [0038] – [0044], generating an “active engagement score SPM” that includes a “sentiment score” factor, which defines “the polarity of appropriateness for each share, comment, etc., for example in a range from -100 (most negative) to +100 (most positive), based on a semantic analysis of the tone or attitude or context of the sharing event”; see Ramer, [0082], [0255], [0382], collecting user “web interactions” with content items associated with webpages and websites);

weighting, by a processor of the first server, the first and second data of each respective content item, the weighting being at least partially based on the characterization of the user and each respective website and each of the one or more respective metrics, so as to determine a relative weighting of each collected content item of the content collection with respect to its relevance to the online user (see Wong, [0040] – [0044], “count of all relevant "shares" identified for various social networks” and “considering other factors and weighting results accordingly,” and the active engagement score SPM takes into account “Site Unique” visitors, representing weighting of the first and second sets of data; see Ramer, [0146], [0407], [0903], [0907], representing weighting of the first and second data sets);

calculating, by a processor of the first server, for each collected content item of the content collection, a raw content item performance score based on the relative weighting (see Wong, [0040] – [0044], “active engagement score SPM (Shares Per Thousand) that represents the number of sharing-events per thousand unique users for each URL” and “a sentiment score...may be factored into the social engagement score SPM,” where the sentiment score “defines the polarity of appropriateness for each share, comment, etc., for example in a range from -100 (most negative) to +100 (most positive), based on a semantic analysis of the tone or attitude or context of the sharing event”; the active engagement score and the factored sentiment score represent the raw content item performance score; see Ramer, [0176], “The relevancy score may be a numerical summary of the statistical association between contextual website data parameters and mobile content parameters” and “receive a relevancy score for each of a plurality of websites. "Entertainment" websites may receive a higher relevancy score than the "Weather" websites,” [0407], [0903], calculating a performance score);

calculating, by a processor of the first server, a content item performance score, so as to produce a collection of scored content items, the collection of scored content items including scored first data and scored second data (see Wong, [0013], [0040] – [0049], “This information is normalized and weighted in order to create a quality score...for each piece of content so it can be compared and ranked,” where normalizing the active engagement score to create a quality score represents calculating the content item performance score; see Ramer, [0176], [0407], [0903], scored first and second data; see Applicant’s Specification, US 2022/0067113, [0059], “A mapping function can be applied to the raw score in order to transform the raw score into a content item performance score within a minimum value and a maximum value, for example 0-10”);

and ordering, by a processor of the first server, in relation to its associated scored first and scored second set of data, each of the content items of the content collection to produce an ordered content collection, the ordering being based on a relation of each content item to one or more characteristics of the characterized online user (see Wong, [0028], [0037] – [0044], “a ranked list of the documents is generated by the web service, the ranking based on the quality score developed in the processing step,” [0040] – [0049], providing a “ranking” of objects in response to a user query, representing the ordered scored content collection, where generating the ranking from top to lowest represents the ordered content collection; see Ramer, [0131], [0176], [0407], returning “rankings” of the content collection that includes the first and second data).
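For orientation, the engagement scoring attributed to Wong above (an SPM rate of sharing events per thousand unique users, with a sentiment score in the range -100 to +100 factored in) can be sketched as follows. This is a minimal illustration only; the function name, the guard against zero visitors, and the multiplicative way of "factoring in" sentiment are assumptions, not Wong's disclosed formulation:

```python
def raw_performance_score(share_events, unique_users, sentiment):
    """Sketch of an SPM-style engagement score with a factored
    sentiment score (hypothetical formulation, not Wong's own).

    share_events: count of shares/likes/comments observed for a URL
    unique_users: unique visitors recorded for the same URL
    sentiment:    semantic polarity in [-100, +100]
    """
    # SPM: sharing events per thousand unique users
    # (the max(..., 1) guard is an assumption, not in Wong).
    spm = 1000.0 * share_events / max(unique_users, 1)
    # One plausible way to "factor in" the sentiment score:
    # scale SPM by the normalized polarity.
    return spm * (1.0 + sentiment / 100.0)

# 250 shares among 50,000 unique users, mildly positive tone:
score = raw_performance_score(250, 50_000, sentiment=20)
```

Under this sketch the example yields 5 SPM scaled by a 1.2 sentiment factor; any resemblance to Wong's actual arithmetic beyond the quoted SPM definition is not asserted.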

For claim 3, the combination teaches the method in accordance with claim 2, further comprising querying, via an API coupling the second server to the first server, in at least a passive manner, one or more of the plurality of webpages presented by at least the first server so as to continuously receive updated data, the updated data including one or more of updated content items of potential interest as well as updated first and second sets of data (see Wong, [0027], “analytical information is available through the application program interface (API) of the social network,” [0035], “collects social engagement data from various social media sites” that are “collected through the API of these services for a specific URL,” where collecting social engagement data via the API of the social media servers represents querying, via an API, for updated first data; see Ramer, [0488], “a mobile content website may be periodically analyzed for changes in content for purposes of assessing the relevance of keywords previously generated,” [01372], “connector may comprise an application programming interface or other code suitable to connect source and target data facilities,” where an application programming interface for periodically analyzing website changes represents passive querying via an API to continuously receive updates for the first and second data).

For claim 4, the combination teaches the method in accordance with claim 2, wherein the mapping further comprises delineating each of the plurality of content items to its associated weighted first and second set of data (see Wong, [0028], [0037] – [0044], generating an “active engagement score SPM” and “This information is normalized and weighted in order to create a quality score...for each piece of content so it can be compared and ranked,” where normalizing and ranking the score to create a quality score represents mapping and delineating items).

For claim 5, the combination teaches the method in accordance with claim 3, wherein the method further comprises adjusting respective first item performance scores based on the updated data (see Ramer, [0176], [0488], where “changes” to the assessed “relevance of keywords” adjust the “relevancy score” for affected content items).

For claim 6, the combination teaches the method in accordance with claim 5, wherein the method further comprises providing, for display at a graphical user interface of the first or a second client computing device, the ordered content collection (see Wong, [0029], [0037] – [0044], “results delivered to the client”).

For claim 8, the combination teaches the method in accordance with claim 3, wherein the method further comprises providing, for display at a graphical user interface of the first or the second client computing device, data encapsulating the content item performance score for each of the content items of the ordered content collection (see Wong, [0028] – [0031], [0037] – [0044], where displaying ranked results represents encapsulated performance score data).

For claim 9, the combination teaches the method in accordance with claim 8, wherein the data encapsulating the content item performance score includes one or more of the raw or normalized score, a webpage or website performance score, content data, webpage data, website data, an encoded file, and other data synthesized and/or extracted from the ordered content collection (see Wong, [0029], “results delivered to the client” comprise data content, [0040] – [0049], generating an “active engagement score SPM” and “This information is normalized and weighted in order to create a quality score...for each piece of content so it can be compared and ranked,” representing a normalized score).

For claim 10, the combination teaches the method in accordance with claim 2, wherein the method further comprises receiving a selection, by the user, of a content item to produce a selected content item (see Wong, [0031], “presented with a list of topics for selection. The search engine then returns a list of URLs and/or HTML links in response to the query”).

For claim 11, the combination teaches the method in accordance with claim 10, wherein the method further comprises employing the selected content item for generating content for publishing on at least one webpage (see Wong, [0031], “presented with a list of topics for selection. The search engine then returns a list of URLs and/or HTML links in response to the query,” where displaying the search results represents publishing on a webpage for the user to view).

For claim 12, the combination teaches the method in accordance with claim 9, wherein the characterization of the website includes a number of webpages, webpage views, a webpage size, a number, frequency and/or consistency of one or more content items on the webpage (see Ramer, [0815], “total # of page views” represents characterization of the website via webpage views).

For claim 13, the combination teaches the method in accordance with claim 12, wherein the one or more metrics evaluating the content item includes one or more of a number of content views, a content size, a content type, a content origin, HTML tag, a “like,” a “forward,” a “comment,” an exclamation point, a question mark, or a number, frequency, and/or consistency of the content items on the page (see Wong, [0027], “Facebook (Shares, Likes, Discussions), Twitter (Tweets, ReTweets), Google+ (+1s), Digg (Diggs), LinkedIn (Shares), Delicious, StumbleUpon (Stumbles), Reddit, and Pinterest (Pin count from button stats),” [0040] – [0044], “defines the polarity of appropriateness for each share, comment, etc.”).
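The two claimed data sets, as mapped to Wong and Ramer above (a website characterization and per-item engagement metrics), can be pictured as a simple per-item record. This is a sketch only; the field names are hypothetical and drawn loosely from the cited paragraphs:

```python
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """Illustrative record for a collected content item; field names
    are hypothetical, not taken from either reference."""
    url: str
    # First set of data: characterization of the source website,
    # e.g. Ramer's "contextual website data parameters".
    site_category: str
    # Second set of data: metrics evaluating the item itself,
    # e.g. Wong's social engagement signals.
    metrics: dict = field(default_factory=dict)

item = ContentItem(
    url="https://example.com/post",
    site_category="Entertainment",
    metrics={"shares": 250, "likes": 1200, "sentiment": 20},
)
```

A collection of such records is then what the weighting and scoring steps of claims 2 and 14 would operate over, under this reading.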

For claim 14, Wong teaches a content scoring system for scoring communication content derived from a content collection for use in a generation of a communication to be published by a user, the system comprising: at least one data processor; and a memory storing instructions which, when executed by the at least one data processor, result in operations comprising: a content database storing the content collection containing a plurality of content items collected from one or more social media platforms or online publications (see Wong, [0026] – [0028], “the processes described herein utilize this social engagement data to score the relevance of network objects identified in response to a user's query,” [0035] – [0036], “web service collects social engagement data from various social media sites for each URL identified in response to the query. For example, the number of shares, likes and discussions on Facebook, or tweets and retweets on Twitter, are active consumer engagement signals that can be collected through the API of these services for a specific URL” and “social sharing data are aggregated and processed by the web service,” [0038] – [0044], processing and storing social engagement data associated with relevant network objects), the plurality of content items being associated with…a second set of data associated with particular content items, the second set of data including one or more metrics evaluating the content items (see Wong, [0035], “the web service collects social engagement data from various social media sites for each URL identified in response to the query,” [0038] – [0044], generating an “active engagement score SPM” that includes a “sentiment score” factor, which defines “the polarity of appropriateness for each share, comment, etc., for example in a range from -100 (most negative) to +100 (most positive), based on a semantic analysis of the tone or attitude or context of the sharing event”).

Ramer teaches “the plurality of content items being associated with a first set of data related to the social media platform or online publication from which individual content items of the content collection were collected” (see Ramer, [0136], [0174], “Contextual information that may be associated with a website,” [0176], collecting “contextual website data parameters” that characterize “Entertainment” and “Weather” websites, [0487], [0885], [0907], [0991] – [0092], [1043], [1050], [01390], where collected “contextual website data parameters” that identify a website associated with categories such as florist, caterer, photographer, or sports represent the first set of data associated with collected content items). It would have been obvious to one skilled in the art at the time of the invention to modify the teachings of Wong with the teachings of Ramer to efficiently assess website characteristic data to return more relevant search results (see Ramer, [0176], [0972]).

The combination further teaches a scoring generator for scoring the content items contained in the content collection to produce scored content items, the scoring characterizing a performance of the content items and being based in part on the first and second sets of data as well as a determined relevancy to the user (see Wong, [0038] – [0044], scoring based on social engagement data related to the collection from a social media site and related to object(s) within the media site), the scoring generator comprising: a determination processor configured for: weighting the first set of data relative to at least one parameter, the one or more metrics of the second set of data, and further weighting the first and second data relative to the relevancy of the user (see Wong, [0040] – [0044], “count of all relevant "shares" identified for various social networks” and “considering other factors and weighting results accordingly,” and the active engagement score SPM takes into account “Site Unique” visitors, representing weighting of the second set of data; see Ramer,
[0146], [0407], [0903], [0907], representing weighting of the first and second data sets), calculating a raw content item performance score for each collected content item of the content collection, the raw content item performance score being based at least partially on the weighting of the first and second data (see Wong, [0040] – [0044], “active engagement score SPM (Shares Per Thousand) that represents the number of sharing-events per thousand unique users for each URL,” where the active engagement score represents the raw content item performance score; see Ramer, [0176], “The relevancy score may be a numerical summary of the statistical association between contextual website data parameters and mobile content parameters” and “receive a relevancy score for each of a plurality of websites. "Entertainment" websites may receive a higher relevancy score than the "Weather" websites,” [0407], [0903], calculating a performance score), and calculating a content item performance score by applying a normalization function to the raw content item performance score to produce a collection of scored content items (see Wong, [0040] – [0049], generating an “active engagement score SPM” and “This information is normalized and weighted in order to create a quality score...for each piece of content so it can be compared and ranked,” where normalizing the active engagement score to create a quality score represents the content item performance score; see Ramer, [0176], [0407], [0903], scored first and second data).

For claim 15, the combination teaches the platform in accordance with claim 14, further comprising a first server, the first server instantiating the at least one data processor (see Wong, [0022], “a computer system could include more than one processor”).
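The normalization step cited above, together with the Applicant's own [0059] statement that a mapping function transforms the raw score into a performance score “within a minimum value and a maximum value, for example 0-10,” can be illustrated with a simple min-max mapping. This is one plausible reading for the reader's benefit, not the specification's actual function:

```python
def map_to_performance_score(raw, raw_min, raw_max, lo=0.0, hi=10.0):
    """Min-max mapping of a raw score into [lo, hi] (0-10 by default),
    one plausible reading of the claimed 'mapping function'."""
    if raw_max == raw_min:          # degenerate collection: all scores tie
        return lo
    clipped = min(max(raw, raw_min), raw_max)
    return lo + (hi - lo) * (clipped - raw_min) / (raw_max - raw_min)

# Raw scores for a small content collection, mapped onto 0-10:
raw_scores = [2.5, 6.0, 14.0]
lo_raw, hi_raw = min(raw_scores), max(raw_scores)
scored = [map_to_performance_score(r, lo_raw, hi_raw) for r in raw_scores]
```

Whatever the actual mapping, the claimed point is only that a raw score is transformed into a bounded, comparable score; min-max is the simplest function with that property.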

For claim 16, the combination teaches the platform in accordance with claim 15, further comprising a querying processor for querying, via an API coupling between a second server and the first server, in at least a passive manner, one or more of the plurality of webpages presented by at least the first server so as to continuously receive updated data, the updated data including one or more of updated content items of potential interest as well as updated first and second sets of data (see Wong, [0027], “analytical information is available through the application program interface (API) of the social network,” [0035], “collects social engagement data from various social media sites” that are “collected through the API of these services for a specific URL,” where collecting social engagement data via the API of the social media servers represents querying, via an API, for updated first data; see Ramer, [0488], “a mobile content website may be periodically analyzed for changes in content for purposes of assessing the relevance of keywords previously generated,” [01372], “connector may comprise an application programming interface or other code suitable to connect source and target data facilities,” where an application programming interface for periodically analyzing website changes represents passive querying via an API to continuously receive updates for the first and second data).

For claim 17, the combination teaches the platform in accordance with claim 16, further comprising a communication generator for selecting one or more scored content items for incorporation into a communication, the selecting being based at least in part on a score of each selected content item, and generating a communication, the generated communication comprising at least a portion of each of the selected scored content items (see Wong, [0029], [0032], selecting at least one stored search result for a communication).

For claim 18, the combination teaches the platform in accordance with claim 17, further comprising a mapping processor for mapping and ordering each of the scored content items of the content collection in relation to its associated weighted first and weighted second set of data, to produce a functionally ordered and scored content collection (see Wong, [0028], [0037] – [0044], “normalized” score data represents mapping, and generating a ranking from the top ranking to the lowest ranking represents ordering by quality score to produce a functionally ordered and scored content collection).

For claim 19, Wong teaches a method for retrieving and scoring online content for use by a user in evaluating communication content for use in generating a communication, the method comprising: characterizing, by a first server computer, based on a level of online interaction, one or more interests of the user (see Wong, [0011] – [0012], “engagement data is collected and factored”); searching, by the first server computer via a network, a webpage having one or more published items being hosted by a second server computer, each published item including one or more content items, the searching resulting in an identified content item of interest to the user (see Wong, [0011] – [0012], [0026], [0035], searching “web pages” in response to user queries, where “the web service collects social engagement data from various social media sites for each URL identified in response to the query”); retrieving, by the first server computer from the second server computer, the identified content item of interest from the webpage…a second set of data associated with the retrieved content item of interest…and the second set of data including one or more metrics evaluating the retrieved content item of interest (see Wong, [0026], “restaurant may be represented by a set of URLs representing different web pages, for example, the home page of the restaurant, a menu page, a reservations page, a collection of
reviews of the restaurant on sites like Yelp and/or Zagat,” where web sites that host content represent one or more second servers, [0027], [0034], “The ingested URLs are indexed and stored by the web service in step 212,” where objects grouped within the web pages represent the plurality of content items, [0035], “the web service collects social engagement data from various social media sites for each URL identified in response to the query,” [0038] – [0044], generating an “active engagement score SPM” that includes a “sentiment score” factor, which defines “the polarity of appropriateness for each share, comment, etc., for example in a range from -100 (most negative) to +100 (most positive), based on a semantic analysis of the tone or attitude or context of the sharing event,” representing the second set of metric data); Ramer teaches “a first set of data associated with the webpage from which the content item is retrieved” (see Ramer, [0136], [0174], “Contextual information that may be associated with a website,” [0176], collecting “contextual website data parameters” that characterize “Entertainment” and “Weather” websites, [0487], [0885], [0907], [0991] – [0092], [1043], [1050], [01390], where collected “contextual website data parameters” that identify a website associated with categories such as florist, caterer, photographer, or sports represent the first set of data associated with collected content items) and “the first set of data including a characterization of the webpage from which the content item was retrieved” (see Ramer, [0136], [0174], [0176], collecting “contextual website data parameters” that characterize “Entertainment” and “Weather” websites, [0487], [0885], [0907], [0991] – [0092], [1043], [1050], [01390], where collected “contextual website data parameters” that identify a website associated with categories such as florist, caterer, photographer, or sports represent the first set of data associated with collected content items).
It would have been obvious to one skilled in the art at the time of the invention to modify the teachings of Wong with the teachings of Ramer to efficiently assess website characteristic data to return more relevant search results (see Ramer, [0176], [0972]). The combination further teaches calculating, by the first server computer, both a content item raw performance score (see Wong, [0040] – [0044], “active engagement score SPM (Shares Per Thousand) that represents the number of sharing-events per thousand unique users for each URL” and associated “a sentiment score...may be factored into the social engagement score SPM” and sentiment score “defines the polarity of appropriateness for each share, comment, etc., for example in a range from -100 (most negative) to +100 (most positive), based on a semantic analysis of the tone or attitude or context of the sharing event” where active engagement score and factored sentiment score represent raw content item performance score; see Ramer, [0176], “The relevancy score may be a numerical summary of the statistical association between contextual website data parameters and mobile content parameters” and “receive a relevancy score for each of a plurality of websites. 
"Entertainment" websites may receive a higher relevancy score than the "Weather" websites,” [0407], [0903], calculating performance score) and a relevance score for the retrieved content item of interest to produce a raw scored content item, the content item raw performance score being based in part on the first and second sets of data and one or more determined relevance factors (see Wong, [0013], [0040] – [0049], “This information is normalized and weighted in order to create a quality score...for each piece of content so it can be compared and ranked,” where normalizing active engagement score to create quality score represents calculating relevance performance score); storing, at a database associated with the first server computer, the raw scored content item to produce a collection of raw scored content items (see Wong, [0040] – [0044]; see Ramer, [0176], [0407], [0903], scores are stored to analyze relevance to user queries); and weighting, by the first server computer, the raw scored content item relative to one or more of the scored content items in the collection of scored content items to produce a weighted scored content item (see Wong, [0040] – [0044], “weighting results accordingly” in calculating active engagement score; see Ramer, [0146], [0407], [0903], [0907], represents weighting the first and second data sets for relevance scoring); and mapping, by the first server computer, each weighted and scored content item of the content collection to produce a final ordered scored content collection, which order is based at least partially on the weighted score for each respective content item (see Wong, [0013], [0028], [0037] – [0044], “normalized” or transformed data for scoring data represents mapping and where generating “ranked” content items represents final ordered scored content collection; see Applicant’s Specification, US 2022/0067113, [0059], “A mapping function can be applied to the raw score in order to transform the raw score into a content 
item performance score within a minimum value and a maximum value, for example 0-10”).

For claim 20, the combination teaches the method in accordance with claim 19, further comprising querying, via an API coupling between a second server and the first server, in at least a passive manner, one or more of the plurality of webpages presented by at least the first server so as to continuously receive updated data, the updated data including one or more of updated content items of potential interest as well as updated first and second sets of data (see Wong, [0027], “analytical information is available through the application program interface (API) of the social network,” [0035], “collects social engagement data from various social media sites” that are “collected through the API of these services for a specific URL,” where collecting social engagement data via the API of the social media servers represents querying, via an API, for updated first data; see Ramer, [0488], “a mobile content website may be periodically analyzed for changes in content for purposes of assessing the relevance of keywords previously generated,” [01372], “connector may comprise an application programming interface or other code suitable to connect source and target data facilities,” where an application programming interface for periodically analyzing website changes represents passive querying via an API to continuously receive updates for the first and second data).

For claim 21, the combination teaches the method in accordance with claim 20, further comprising: evaluating the final ordered scored content collection, and selecting a final scored content item for use in a communication (see Wong, [0037], [0044]; see Ramer, [0131], [0405] – [0408], “ranked” scored items presented as search results).

Claim(s) 7 is/are rejected under 35 U.S.C.
103 as being unpatentable over Wong, US 2013/0204871 (hereinafter Wong), and Ramer et al., US 2009/0240568 (hereinafter Ramer), and further in view of Chien et al., US 2009/0055257 (hereinafter Chien).

For claim 7, Chien teaches the method in accordance with claim 5, wherein the relative weighting comprises characterizing a content-type dependent scaling of a pre-weighted raw content item performance score (see Chien, [0008], [0050], “the affinity score has a scale from 0.00 to 1.00, however, one of ordinary recognizes other scales such as, for example, 0 to 100, or another scale”). It would have been obvious to one skilled in the art at the time of the invention to modify the teachings of Wong and Ramer with the teachings of Chien to provide adjustable scaling of score data to reflect the variable data set being scored (see Chien, [0008], [0050]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Bennett et al., US 2016/0342692, [0031] – [0032]; Yan et al., US 2016/0275554; Davies, US 2014/0279782, [0023], [0032].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JENSEN HU, whose telephone number is (571) 270-3803. The examiner can normally be reached Monday - Friday, 9-5 PT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JENSEN HU/
Primary Examiner, Art Unit 2169

Prosecution Timeline

May 12, 2025
Application Filed
Jan 23, 2026
Non-Final Rejection — §101, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596737
METHOD FOR GENERATING USER INTEREST PROFILE, ELECTRONIC DEVICE AND STORAGE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12596680
FILE TIERING WITH DATA INTEGRITY CHECKS
2y 5m to grant Granted Apr 07, 2026
Patent 12596722
EFFICIENT STORAGE, RETRIEVAL, AND/OR RENDERING OF PERSONAL ENTRIES
2y 5m to grant Granted Apr 07, 2026
Patent 12585692
Station Library Creation for a Media Service
2y 5m to grant Granted Mar 24, 2026
Patent 12580047
BIOLOGICAL SEQUENCE COMPRESSION USING SEQUENCE ALIGNMENT
2y 5m to grant Granted Mar 17, 2026
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
68%
Grant Probability
95%
With Interview (+27.1%)
3y 7m
Median Time to Grant
Low
PTA Risk
Based on 539 resolved cases by this examiner. Grant probability derived from career allow rate.
