DETAILED ACTION
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 18 April 2025 has been entered.
Status
This First Action Final Office Action is in response to the communication filed on 22 December 2025. Claims 2-3, 7, 9, 12-14, 16-17, 21, 23, and 26-28 have been cancelled previously or currently, claims 1, 15, and 29 have been amended, and no claims have been added. Therefore, claims 1, 4-6, 8, 10-11, 15, 18-20, 22, 24-25, and 29 are pending and presented for examination.
The present application is being examined under the pre-AIA first to invent provisions.
Response to Amendment
A Summary of the Response to Applicant’s Amendment:
Applicant’s amendment does not overcome the rejection(s) under 35 USC § 101; therefore, the Examiner maintains the rejection(s), while updating phrasing in keeping with current examination guidelines.
Applicant’s arguments are found to be not persuasive; please see the Response to Arguments below.
Examiner’s Notes
The Examiner notes that citations to Applicant's specification, as below, are in reference to the Pre-Grant Publication of Applicant’s specification.
Independent claims 1, 15, and 29 each recite “generating … a rating … that identifies a likelihood the webpage contains objectionable content …, the objectionable content being objectionable to an advertiser placing one or more adjacent advertising content item”. Although Applicant provides no description regarding how a webpage is considered “objectionable” (e.g., a hunting supply advertiser may deem guns and/or ammunition non-objectionable and even desirable, whereas a peace- or religious-oriented advertiser may have an opposing view), it is assumed that the advertiser must have previously provided criteria or identifying characteristics regarding what is and is not objectionable to that entity. There does not appear to be any other means of determining the advertiser’s subjective view of what is considered objectionable (the only other apparent alternative is to, after page analysis, send the page content and/or analysis to the advertiser for a determination of whether the webpage is “objectionable”, which essentially defeats the purpose of the invention).
Independent claims 1, 15, and 29 each also recite converting, applying, and/or generating steps as performed by or to a component or model “which includes machine learning”. The Examiner has reviewed the specification to understand the scope and breadth of what this means, including with regard to whether there is any invention or discovery related to machine learning. The only discussion the Examiner can find is that this may be performed “by a classification component or any other suitable machine learning mechanism” (Applicant ¶¶ 0088, 0089, and 0090), and that “classification component … can be configured to facilitate the seamless inclusion and removal of models. For example, as improved machine learning approaches or improved models are developed, an updated model 940 can be introduced to classification component” (Applicant ¶ 0093). Therefore, the claims are understood to encompass ANY machine learning mechanism or type, including machine learning algorithms or techniques that preexisted the claimed invention, and the specification conveys no indication of any change, modification, improvement, innovation, invention, or discovery related to the machine learning as claimed.
Independent claims 1, 15, and 29 each also further recite at the combining of estimated probabilities step “wherein the combination of the first estimated probability and the second estimated probability is weighted based on a first bias associated with the first rating model that generated the first estimated probability and a second bias associated with the second model that generated the second estimated probability”. The Examiner has reviewed the specification to understand the scope and breadth of what this means, including any indications of the term “weigh” or its derivations (e.g., weight, weighting, weighted). The only discussion the Examiner can find is related to applying weights to evidence or rating models as mentioned at Applicant ¶ 0019, assigning weights to a category at Applicant ¶ 0062, assigning weights to categories and/or a page at Applicant ¶ 0064, “apply[ing] a weighting scheme to the evidence” at Applicant ¶ 0095, and/or that a “rating model assigns a utility or weight to unlabeled instances” at Applicant ¶ 0102, but none of these discuss any bias as a relation to, or basis of, weighting. The only discussion of bias is Applicant ¶ 0098, which Applicant argues provides support (6 July 2023 Remarks at 10), but that merely indicates that “an individual model or a class of models can have biases that lead to mistaken inferences” and therefore “the output of the ensemble is combined to smooth out the biases of the individual models”. There is no indication of the combining being weighted (only other data or factors are conveyed as being weighted), nor that any weighting is based on bias. There is no mention or indication of quantifying, measuring, calculating, or determining bias; rather, only the possible presence or existence of bias is conveyed.
However, the Examiner recognizes that any time two (or more) probabilities are combined as indicated, some form of weighting is apparently present: e.g., an average of the probabilities is a 1/2 weighting of each probability when two probabilities are combined (a 1/3 weighting when three are combined, etc.), and choosing one probability over one or more other probabilities is a 100% weight for the selected probability and a 0% weight for the other(s), with similar weighting applying to any other combination. The “biases” are not explained or exemplified, but rather are merely indicated as something that “can” exist, so the term “bias” appears to mean and/or encompass anything that leads to a less-than-perfect result (and a perfect result, it appears, would be virtually impossible for every iteration since these are “estimated” probabilities). Therefore, the claims are understood and interpreted to encompass or include all forms of combining, since weighting based on biases would appear to be inherent to combining probabilities. The Examiner notes that any other interpretation, should Applicant attempt to rebut this understanding and interpretation, would appear to require a § 112 rejection for lack of written support.
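The point that any combination of probabilities can be expressed as a choice of weights may be illustrated with a brief sketch (offered for illustration only; the function and values below are hypothetical and are not drawn from Applicant's disclosure or any reference of record):

```python
def combine(p1, p2, w1, w2):
    """Weighted combination of two estimated probabilities.

    Hypothetical illustration: any scheme for combining two
    probabilities corresponds to some assignment of weights.
    """
    assert abs((w1 + w2) - 1.0) < 1e-9, "weights must sum to 1"
    return w1 * p1 + w2 * p2

# A simple average is the special case of equal (1/2, 1/2) weights.
average = combine(0.6, 0.8, 0.5, 0.5)    # approximately 0.7

# Selecting one probability over the other is the special case of a
# 100%/0% weighting.
selected = combine(0.6, 0.8, 1.0, 0.0)   # 0.6 (the first probability)
```

Under this view, an equal-weight average, a selection of one model's output, and any intermediate blend are all instances of the same weighted combination, differing only in the weights chosen.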
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1, 4-6, 8, 10-11, 15, 18-20, 22, 24-25, and 29 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Please see the following Subject Matter Eligibility (“SME”) analysis:
For analysis under SME Step 1, the claims herein are directed to a method (claims 1, 4-6, 8, and 10-11), system (claims 15, 18-20, 22, and 24-25), and non-transitory computer-readable medium (claim 29), which would be classified under one of the listed statutory classifications (SME Step 1=Yes).
For analysis under revised SME Step 2A, Prong 1, independent claim 1 recites A method for rating webpages for safe online advertising,
the method comprising:
receiving, by a hardware processor, a uniform resource locator corresponding to a webpage in association with a request to rate the webpage, wherein the uniform resource locator is included in a list of webpages that a user is interested in placing advertising content;
determining, by the hardware processor, whether the uniform resource locator should be prioritized for rating the website based on a frequency of occurrence of the webpage in an advertising stream associated with the user;
in response to determining that the uniform resource locator should be prioritized for rating the website based on the frequency of occurrence of the webpage in the advertising stream associated with the user, selecting, by the hardware processor, a plurality of evidentiary sources relating to the uniform resource locator, each of the plurality of evidentiary sources having one or more pieces of evidence;
determining, by the hardware processor, an optimized subset of evidentiary sources from the plurality of evidentiary sources, wherein text-based evidentiary sources are selected in the optimized subset of evidentiary sources and image-based evidentiary sources are excluded from the optimized subset of evidentiary sources based at least in part on content of the plurality of evidentiary sources and a budget parameter;
extracting, by the hardware processor, a plurality of pieces of evidence including text information and site information from pages corresponding to the optimized subset of evidentiary sources from one or more remote servers by transmitting a plurality of evidence requests to the one or more remote servers for at least one of the text information and the site information and receiving responses to each of the plurality of evidence requests that includes at least one of the text information and the site information;
converting, by the hardware processor, each of the plurality of pieces of evidence from the plurality of evidence responses into a plurality of instances that describe the webpage and that are suitable for processing by a classification component, which includes machine learning, each instance is a structured collection of at least a portion of the plurality of pieces of evidence corresponding to the webpage;
applying, by the hardware processor, each of the plurality of instances that describe the webpage to a first rating model, which includes machine learning and which is associated with a first content category;
generating, by the hardware processor, a first estimated probability that the webpage belongs to one of a first plurality of severity classes of the first content category having a first confidence level using the first rating model, which includes machine learning and which is associated with the first content category;
applying, by the hardware processor, each of the plurality of instances that describe the webpage to a second rating model, which includes machine learning and which is associated with a second content category;
generating, by the hardware processor, a second estimated probability that the webpage belongs to one of a second plurality of severity classes of the second content category having a second confidence level using the second rating model, which includes machine learning and which is associated with the second content category;
combining, by the hardware processor, the first estimated probability and the second estimated probability to generate an aggregate probability vector associated with the first content category and the second content category for the webpage, wherein the combination of the first estimated probability and the second estimated probability is weighted based on a first bias associated with the first rating model that generated the first estimated probability and a second bias associated with the second model that generated the second estimated probability and wherein the first estimated probability and the second estimated probability are transmitted to a calibration function that generates the aggregate probability vector;
generating, by the hardware processor, a rating for the webpage that identifies a likelihood that the webpage contains objectionable content of the first content category and the second content category based at least in part on the aggregate probability vector, the objectionable content being objectionable to an advertiser placing one or more adjacent advertising content item;
transmitting the generated rating for the webpage in response to the request to rate the webpage; and
receiving an indication regarding whether to place the advertising content adjacent to content appearing on the webpage.
Independent claim 15 is analyzed in the same manner as claim 1, except that it is directed to a system for rating webpages for safe online advertising, the system comprising a server having a hardware processor that is recited as performing the same or similar activities as in claim 1.
Independent claim 29 is also analyzed in the same manner as claim 1, except that it is directed to a non-transitory computer-readable medium containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for rating webpages for safe online advertising, the method comprising performing the same or similar activities as in claim 1.
The underlined portions of the claims are an indication of elements additional to the abstract idea (to be considered below).
As such, the claim(s) indicate(s) a request to rate a publication or venue for advertising, gathering evidence related to the publication or venue and limiting the evidence to text-based sources for extraction of text and site information, converting the evidence to instances, applying the instances to first and second rating models that provide a probability and confidence level as output, combining the rating probabilities and confidence levels into an aggregated probability output, generating a rating identifying a likelihood of objectionable content based on the aggregate probability, transmitting the generated rating in response to the request to rate the publication, and receiving an indication regarding whether to place advertising.
The claim elements may be summarized as the idea of analyzing a publication or venue such as a webpage for advertising purposes; however, the Examiner notes that although this summary of the claims is provided, the analysis regarding subject matter eligibility considers the entirety of the claim elements, both individually and as a whole (or ordered combination). This idea is within the following grouping(s) of subject matter:
(a) Mathematical concepts (e.g., relationships, formulas, equations, and/or calculations) as based on the use of machine learning, applying “instances” (i.e., data) to a rating model, generating a first and second probability, combining the first and second probabilities with weighting into an aggregated vector, and generating a likelihood (i.e., a probability);
(b) Certain methods of organizing human activity (e.g. fundamental economic principles or practices such as hedging, insurance, mitigating risk; commercial or legal interactions such as agreements, contracts, legal obligations, advertising, marketing or sales activities/behaviors, or business relations; and/or managing personal behavior or relationships between people such as social activities, teaching, and following rules or instructions) as based on gathering information to convert to (i.e., form) a description of a publication site so as to determine appropriateness for the placement of advertising content; and
(c) Mental processes (e.g., concepts performed in the human mind such as observation, evaluation, judgment, and/or opinion) based on the evaluation, judgment, and/or opinion regarding what may or may not be considered objectionable and/or appropriate for content and whether an advertiser should or would find such content appropriate for advertising.
The dependent claims (claims 4-6, 8, 10-11, 18-20, 22, and 24-25) appear to be encompassed by the abstract idea of the independent claims since they merely indicate merging evidence from sources into a page object for the URL (claims 4 and 18), receiving feedback, collecting more evidence, and revising the page object (claims 5 and 19), mapping facets of evidence to features (claims 6 and 20), what category labels are used (e.g., adult content, guns, bombs, etc.) (claims 8 and 22), applying weights to evidence (claims 10 and 24), and/or applying weights to rating models (claims 11 and 25).
The Examiner notes that the modeling includes any form or type of modeling, including binomial (e.g., Yes/No, True/False, etc.) and/or inclusion within a category severity group such as, essentially, the MPAA movie rating system (see Applicant ¶¶ 0049, 0052). As such, the mathematics and modeling need not be so complex as to require computer activity or calculation, but may be performed in the human mind.
The Examiner further notes that History of Ratings, from FilmRatings.com, author unknown, downloaded from https://www.filmratings.com/History on 22 October 2019 indicates that the current Classification & Ratings Administration (CARA) emerged in 1968, the timeline indicating that in 1968 “The modern voluntary movie rating system is born. Movies are rated G, M, R or X. The M later become PG”, in 1984 “The ‘PG-13’ rating is introduced”, and in 1990 “The ‘NC-17’ rating replaces the ‘X’ rating. Rating descriptors are added, giving parents more information about the elements of a movie.”
Further, Roos, Dave, How the MPAA Works, downloaded from How Stuff Works at https://entertainment.howstuffworks.com/mpaa.htm/printable on 22 October 2019 indicates that the MPAA rating “letter system -- G, PG, PG-13, R, and NC-17” is “ubiquitous” (p. 1, second ¶, and again at p. 2, first ¶ after the heading “MPAA and Movie Ratings”), that ratings are assigned by CARA and “When a film is submitted to CARA, it is viewed by members of an eight- to 13-person rating board, overseen by senior raters”, and that math and combining ratings are used, at least in some cases such as deciding between PG and PG-13, and/or PG-13 and R ratings. As such, Roos indicates modeling as encompassed by the claims (i.e., performing the math indicated or required within the broadest reasonable interpretation, and weighting the ratings provided by the rating board, as based on biases that may/can be present for panelists), that this is long-established human activity, and that this is apparently done (or has been done, was done) by mental processes (i.e., by human panels and since 1968, before computers were prolific or commonly accessible), at least within the field of movies for content.
It would appear that advertisers have for a long time selected publications or venues on which to advertise or not advertise as evidenced by How to Write Advertisements That Sell, anonymous, from A. W. Shaw Co., dated 1912, such as at p. 28 indicating that advertisers of certain products (e.g., certain books) need to consider the publication to advertise in for each product/book – a gun and/or ammunition advertiser may find the farm paper to be appropriate and not objectionable, but may deem the religious weekly objectionable to use some examples mentioned (see also p. 35, selecting publications according to prospect group; Part IV, Chaps. XIII-XVI, pp. 87-108, Planning Out Mediums, Space, and Appropriations; p. 112 “Some firms and agencies have kept records of this sort for years. They know accurately before they start a campaign what pieces of copy ‘take’ and what mediums bring the best returns on their offer.”; et seq.), where the subtitle indicates that at least 146 companies were practicing this almost a century prior to Applicant’s claim to invention.
Therefore, the claims are found to be directed to an abstract idea.
For analysis under revised SME Step 2A, Prong 2, the above judicial exception is not integrated into a practical application because the additional elements do not impose a meaningful limit on the judicial exception when evaluated individually and as a combination. The additional elements are that the content is online, using uniform resource locators (URLs) and server(s), for webpages, via a hardware processor, and that the intended purpose is to determine whether to place advertising at the webpage (as a publication or venue) (at claim 1), the system comprising: a server having a hardware processor (at claim 15), and a non-transitory computer-readable medium containing computer-executable instructions that, when executed by a processor, cause the processor to perform a method for rating webpages (at claim 29). These additional elements do not reflect an improvement in the functioning of a computer or an improvement to other technology or technical field, effect a particular treatment or prophylaxis for a disease or medical condition (there is no medical disease or condition, much less a treatment or prophylaxis for one), implement the judicial exception with, or by using in conjunction with, a particular machine or manufacture that is integral to the claim, effect a transformation or reduction of a particular article to a different state or thing (there is no transformation/reduction of a physical article), and/or apply or use the judicial exception in some other meaningful way beyond generically linking use of the judicial exception to a particular technological environment.
The online aspect, use of URLs, servers, webpages, and/or a hardware processor is merely “Adding the words ‘apply it’ (or an equivalent) with the judicial exception, or mere instructions to implement an abstract idea on a computer” as “Limitations that the courts have found not to be enough to qualify as ‘significantly more’", per MPEP § 2106.05.I.A. The intended purpose of using the rating to determine whether to place advertising at a webpage is a field of use that MPEP § 2106.05(h) indicates is also generally not regarded as “significantly more.”
The claims appear to merely apply the judicial exception, include instructions to implement an abstract idea on a computer, or merely use a computer as a tool to perform the abstract idea. The additional elements appear to merely add insignificant extra-solution activity to the judicial exception and/or generally link the use of the judicial exception to a particular technological environment or field of use.
For analysis under SME Step 2B, the claim(s) does/do not include additional elements that are sufficient to amount to significantly more than the judicial exception because as indicated above, the additional elements merely indicate that the idea is applied via a computer, and/or in a computing environment, and for the intended purpose of assessing whether to place advertising (as a field of use). Therefore, regardless of, or aside from, whether the additional elements are considered well-understood, routine, conventional activity (WURC), they are found to be insignificant on their own.
Further, however, when the entirety of the activities of the claim are considered, they essentially indicate gathering or receiving information (such as over a network), limiting what information is considered, extracting information, converting and applying that information to the models and combining the results to generate a rating, transmitting the rating, and receiving a response. The particular data selected or claimed is insignificant as a field of use per MPEP § 2106.05(h) (citing Electric Power Group, et al.), and the gathering, transmitting, and receiving are insignificant as receiving or transmitting data over a network per MPEP § 2106.05(d)(II) (citing Symantec, TLI Comms., OIP Techs., and BuySAFE). The extracting is insignificant as electronically scanning or extracting data and/or merely selecting the data per the field of use (see MPEP § 2106.05(d)(II), citing Content Extraction, and MPEP § 2106.05(h) citing Electric Power Group, et al. for field of use). And the applying data to the models and combining results is performing repetitive calculations (MPEP § 2106.05(d)(II), citing Flook and Bancorp). Therefore, although no elements have been specifically identified as insignificant on the basis of being WURC activity, the claim elements, if they were considered under the WURC activity rubric, would all appear to be insignificant as WURC activity. The combination of elements further appears to result in what one would expect from the combination of elements as an ordered combination; there does not appear to be any added benefit, inventiveness, or significance based on an ordered combination.
There is no indication the Examiner can find in the record regarding any specialized computer hardware or other “inventive” components, but rather, the claims merely indicate computer components which appear to be generic components and therefore do not satisfy an inventive concept that would constitute “significantly more” with respect to eligibility. Applicant ¶¶ 0114 and 0127-0128 indicate that the client and server computers implementing the claim(s) may be general purpose computers, and Applicant ¶ 0128 specifically indicates that “[t]he procedures presented herein are not inherently related to a particular computer or other apparatus”.
The individual elements therefore do not appear to offer any significance beyond the application of the abstract idea itself, and there does not appear to be any additional benefit or significance indicated by the ordered combination, i.e., there does not appear to be any synergy or special import to the claim as a whole other than the application of the idea itself.
The dependent claims (claims 4-6, 8, 10-11, 18-20, 22, and 24-25) appear to merely indicate merging evidence into a page object (claims 4 and 18), using feedback to revise the page object (claims 5 and 19), mapping evidence facets to features (claims 6 and 20), category types (claims 8 and 22), weighting evidence from sources (claims 10 and 24), and/or weighting the rating models (claims 11 and 25), and therefore only limit the application of the idea and do not add significantly more than the idea. MPEP § 2106.05(d)(II) indicates these as insignificant as included within the listed activities courts have determined to be insignificant.
Therefore, SME Step 2B=No, any additional elements, whether taken individually or as an ordered whole in combination, do not amount to significantly more than the abstract idea, including analysis of the dependent claims.
Please see the Subject Matter Eligibility (SME) guidance and instruction materials at https://www.uspto.gov/patent/laws-and-regulations/examination-policy/subject-matter-eligibility, which includes the latest guidance, memoranda, and update(s) for further information.
Allowable Subject Matter
Claims 1, 4-6, 8, 10-11, 15, 18-20, 22, 24-25, and 29 are indicated to be allowable over the prior art.
The following is a statement of reasons for the indication of allowable subject matter:
The claims recite receiving a request to rate a webpage, selecting sources from which to consider evidence, extracting the evidence (including text, image, and site information) from those sources by transmitting a request for the evidence, converting the evidence into instances (as a structured collection of pieces of evidence), applying the instances to a first category rating model and generating a probability and confidence level the webpage belongs in a severity class, applying the instances to a second category rating model and generating a second probability and confidence level of belonging to a severity class, combining the probabilities, and generating a webpage rating identifying a likelihood of objectionable content, transmitting the rating in response to the request to rate the webpage, and receiving an indication whether to place advertising content at the webpage.
The closest art of record includes Buehl (U.S. Patent No. 5,912,696), which teaches to “include[ ] an N-dimensional rating vector encoded into the leader portion or meta-data portion of an analog or digitally encoded media asset” (Buehl at column:lines 1:66-2:2; citation hereafter by number only), “[t]his rating vector has a plurality of dimensions, each dimension representing a defined characteristic. These characteristics may include sexual content, violence, and offensive language. Each of the dimensions is provided with an assigned rating value. This rating vector is then transferred or transmitted along with the asset to the end user's player device” (Buehl at 2:54-60), the “rating vector has a keyword and a magnitude value pair for each of the N dimensions. The vector is assigned to each asset by the asset producer or by an independent review and evaluation entity in accordance with established dimension definition criteria and magnitude definition criteria” (Buehl at 2:10-15) where “[t]he user programs into the filtering device a threshold N dimensional preference vector which has a magnitude value for each of the N dimensions” (Buehl at 2:25-27) and “[i]f any one dimensional value of the asset rating vector is greater than the corresponding preference dimensional value, the filtering device blocks the signal and prevents the digital transmission asset, videotape, CD, etc. from being processed further, thus preventing user access to the actual content of the asset, i.e. preventing the user from seeing or hearing the asset” (Buehl at 2:34-40), and this is to cure deficiencies indicated in media ratings such as the MPAA ratings (e.g., “G”, “PG”, “PG-13”, etc.) (Buehl at 1:11-26).
Further, Chen, Protheroe, Desikan, and Dimitrova, as cited and combined earlier, indicate the text, image, and/or site information extraction (see at least Chen at 0016-0017, Desikan at 0024, Dimitrova at 0016-0018).
Agarwal et al. (U.S. Patent Application Publication No. 2005/0251399, hereinafter Agarwal) discloses “a system and a method for rating and/or approving documents such as image advertisements. In one exemplary embodiment, a method for rating a document comprising an image is provided. A document is received for distribution. Rating information associated with the document is received from one or more rating entities. At least one of the one or more rating entities comprises a processor to determine rating information associated with the image. The document is approved for distribution based on the rating information” (Agarwal at 0007) and where “An aggregate rating may comprise one or more distinct numerical scores (e.g., for different subject areas like sexuality and violence) and/or one or more verbal scores. A verbal score may be a word (e.g., essay) analysis of a feature of the document. For instance, an evaluator may provide a numerical score of {fraction (4/10)} in a "sexual content" category, and the evaluator may also verbally state that the document "includes a half-clothed woman with a sexually suggestive look in her eye." It may also comprise one or more binary scores (such as yes/no or flag/no-flag). For instance, if three of five evaluators flagged a document as pornographic in their content rating, the rating aggregation module 30 may flag the document as pornographic. The numerical scores may comprise one or more measures of a total rating in a particular subject area, and the numerical scores may also indicate other information about the various ratings aggregated. For instance, a score may comprise a mean in addition to a standard deviation of the mean. The aggregate rating may comprise a multidimensional vector coupled with verbal responses” (Agarwal at 0078).
At least both Cheung et al. (U.S. Patent Application Publication No. 2004/0054661, hereinafter Cheung) and Geshwind (U.S. Patent No. 7,080,392) also each describe categorizing or classifying a page based on a probability or likelihood of the page containing adult or particular MPAA-rating content by using machine learning (see Cheung at least at 0075, 0131; see Geshwind at 10:18 to 11:22).
As such, although all of the various aspects of the claims appear to be disclosed and/or taught, it does not appear reasonable to combine the various references that would be required to arrive at the totality of what is claimed; therefore, the claims are indicated as allowable over the prior art.
Response to Arguments
Applicant's arguments filed 22 December 2025 have been fully considered but they are not persuasive.
Applicant’s argument(s) are only regarding the 101 rejection, where the argument(s) recite guidelines (Remarks at 11), then repeat claim 1 (Id. at 12-14), then repeat the amended portion of the claims (Id. at 14), then argue support and repeat specification ¶ 0079 (Id.), and then provide a verbatim repeat (except for the inclusion of the amendment indicating “determining … whether the uniform resource locator should be prioritized”) of the earlier allegations that “It is plainly apparent from the language of the claim that such a method is clearly not a) mathematical concepts (mathematical relationships, mathematical formulas or equations, and mathematical calculations); b) certain methods of organizing human activity; and c) mental processes. Therefore, the claims are directed to patentable subject matter.” (Remarks at 14). This argument fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them.
Applicant then alleges, by merely again repeating the amended portion of the claims, that “even if the claims were directed to an abstract idea…. Applicants respectfully submit that this limitation, in combination with the other limitations of amended claim 1, amount to significantly more than the allegedly abstract idea identified by the Examiner” (Id. at 14-15). This argument does not provide any reasoning as to why or how the addition of more mathematical operations would possibly or potentially “amount to significantly more”, and it therefore also fails to comply with 37 CFR 1.111(b) because it amounts to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them.
Therefore, the Examiner is not persuaded by Applicant's argument(s).
Conclusion
All claims are either identical to or patentably indistinct from claims in the application prior to the entry of the submission under 37 CFR 1.114 (that is, restriction would not be proper) and all claims could have been finally rejected on the grounds and art of record in the next Office action if they had been entered in the application prior to entry under 37 CFR 1.114. Accordingly, THIS ACTION IS MADE FINAL even though it is a first action after the filing of a request for continued examination and the submission under 37 CFR 1.114. See MPEP § 706.07(b). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Where the amended claims indicate a list of webpages at which a user is interested in placing advertising, and Protheroe, as included in the rejections above, indicates one form of this, the Examiner notes that there are apparently many forms or types of such a list, among them:
Herzog et al. (U.S. Patent Application Publication No. 2007/0233565, hereinafter Herzog) teaching area code local sites as a marketing channel for advertisers in the local area (see, e.g., Herzog at 0026-0030).
D’Angelo et al. (U.S. Patent Application Publication No. 2009/0070219, hereinafter D’Angelo) teaching targeting ads to members of a social network (see at least D’Angelo at the Abstract), where the list is the list of users matching the targeting criteria and the webpage and URL are, e.g., the wall or page for each user (see at least D’Angelo at 0056).
Microsoft, Internet Content Rating Association Formed to Provide Global System for Protecting Children and Free Speech on the Internet, dated 12 May 1999, downloaded from https://news.microsoft.com/1999/05/12/internet-content-rating-association-formed-to-provide-global-system-for-protecting-children-and-free-speech-on-the-internet/ on 27 May 2020, indicating that the ICRA has been formed with various founding companies and “will accept additional memberships from companies or organizations willing to join in its efforts to build and manage an internationally acceptable online content rating system” (at p. 1).
Li et al. (U.S. Patent No. 8,032,923, hereinafter Li) indicates “ascertaining a rating for the at least the webpage” (Li at Abstract) and inappropriate content as motivating or causing rating requests for webpages (Li at least at column 1, lines 13-40).
Abbasi, A., Chen, H., and Salem, A. 2008. Sentiment analysis in multiple languages: Feature selection for opinion classification in Web forums. ACM Trans. Inform. Syst. 26, 3, Article 12 (June 2008), 34 pages. DOI = 10.1145/1361684.1361685 http://doi.acm.org/10.1145/1361684.1361685, discusses sentiment analysis and classification of Internet content.
M. Xu, Q. Li, X. Jiang and Y. Cui, "Evading User-Specific Offensive Web Pages via Large-Scale Collaborations," 2008 IEEE International Conference on Communications, Beijing, China, 2008, pp. 5721-5725, doi: 10.1109/ICC.2008.1071, downloaded from https://ieeexplore.ieee.org/abstract/document/4534107 on 28 July 2023, indicating that “This paper presents a collaborative rating system which can detect the polluted pages for users” (at 5721, § I, right column), where “polluted pages … examples include pages contaminated by pornography and violence” (Id. at left column), and “collaborating users leave a rating after they visit a web page. All ratings are correlated to predict potential offensive pages for each user” (at 5722, § 3.A., left column).
Hoashi et al., "Data collection for evaluating automatic filtering of hazardous WWW information," 1999 Internet Workshop. IWS99. (Cat. No.99EX385), Osaka, Japan, 1999, pp. 157-164, doi: 10.1109/IWS.1999.811008, downloaded from https://ieeexplore.ieee.org/abstract/document/811008 on 12 March 2024, indicating the examination of data based on self-rating, individual rating, and automatic filtering as then recognized, and showing “that the development of a high-performance automatic rating algorithm was necessary” (at 163), that “a large number of such documents can be blocked by blocking high hierarchical pages” (at 163-164), and that “We have also made a practical evaluation of automatic filtering using a simple linear classifying algorithm. Experiments on our data collection with this algorithm proved all the hypotheses presented in our data analysis” (at 164).
Jin et al., Sensitive webpage classification for content advertising, published by Association for Computing Machinery, Proceedings of the 1st International Workshop on Data Mining and Audience Intelligence for Advertising, pp. 28-33, 12 August 2007, doi: 10.1145/1348599.1348604, downloaded 6 January 2026, indicating that “In this paper, we address one of the important problems in online advertising, i.e., how to detect whether a publisher webpage contains sensitive content and is appropriate for showing advertisement(s) on it. We take a webpage classification approach to solve this problem. First we design a unique sensitive content taxonomy. Then we adopt an iterative training data collection and classifier building approach, to build a hierarchical classifier which can classify webpages into one of the nodes in the sensitive content taxonomy. The experimental result show that using this approach, we are able to build a unique sensitive content classifier with decent accuracy while only requiring limited amount of human labeling effort” (at Abstract).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SCOTT D GARTLAND whose telephone number is (571)270-5501. The examiner can normally be reached on M-F 8:30 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kambiz Abdi can be reached on 571-272-6702. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Scott D Gartland/
Primary Examiner, Art Unit 3685