Prosecution Insights
Last updated: April 19, 2026
Application No. 17/316,168

METHODS AND APPARATUS TO GENERATE COMPUTER-TRAINED MACHINE LEARNING MODELS TO CORRECT COMPUTER-GENERATED ERRORS IN AUDIENCE DATA

Non-Final OA: §101, §103
Filed: May 10, 2021
Examiner: BEAN, GRIFFIN TANNER
Art Unit: 2121
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Nielsen Company (US), LLC
OA Round: 4 (Non-Final)
Grant Probability: 21% (At Risk)
Projected OA Rounds: 4-5
Projected Time to Grant: 4y 4m
Grant Probability with Interview: 50%

Examiner Intelligence

Career Allow Rate: 21% (grants only 21% of cases; 4 granted / 19 resolved; -33.9% vs TC avg)
Interview Lift: +28.4% across resolved cases with interview
Avg Prosecution: 4y 4m (typical timeline); 45 applications currently pending
Total Applications: 64 across all art units (career history)

Statute-Specific Performance

§101: 37.7% (-2.3% vs TC avg)
§103: 40.4% (+0.4% vs TC avg)
§102: 11.2% (-28.8% vs TC avg)
§112: 9.7% (-30.3% vs TC avg)
Tech Center averages are estimates; based on career data from 19 resolved cases.
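
The headline figures above are simple arithmetic over the raw counts; a quick sketch (the TC-average and interview-lift deltas are read off the dashboard values, and the displayed 50% with-interview figure reflects rounding of these inputs):

```python
# Reproduce the dashboard's headline examiner statistics from the raw
# counts shown above (4 allowances out of 19 resolved cases).
granted = 4
resolved = 19

allow_rate = granted / resolved * 100        # career allowance rate
tc_avg = allow_rate + 33.9                   # dashboard: -33.9% vs TC avg
with_interview = allow_rate + 28.4           # dashboard: +28.4% interview lift

print(f"Career allow rate: {allow_rate:.0f}%")                # 21%
print(f"Implied TC 2100 average: {tc_avg:.1f}%")              # 55.0%
print(f"Implied rate with interview: {with_interview:.1f}%")  # 49.5%
```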

Office Action

Rejections: §101, §103
DETAILED ACTION

This Action replaces the Non-Final Office Action mailed 07/01/2025. This Action is responsive to claims filed 06/02/2025.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 06/02/2025 has been entered.

Status of the Claims

Claims 1, 9, and 25 have been amended. Claim 33 is cancelled. Claim 37 is new. Claims 1, 3-7, 9, 11-12, 14-15, 25, 27-28, 30-31, and 34-37 are currently pending.

Response to Arguments

Applicant's arguments, see Pages 12-17, filed 06/02/2025, regarding the 35 U.S.C. 101 Rejection of claims 1, 3-7, 9, 11-12, 14-15, 25, 27-28, 30-31, and 33-36 have been fully considered, but they are not persuasive. The Examiner contends that if the specific improvement, per Applicant's arguments at Pages 15-16, comes from the merging of datasets to then train models, then the claims, under their broadest reasonable interpretation, amount to a series of mental process steps, which are then applied in the training of a model, followed by the mental process step of selecting the model. For example, if the recited steps of “received…” and “obtaining…” are accepted as pre- or post-solution activity (data retrieval or transmittal), then the “merging…” step is practically performed within the human mind or with the aid of pen and paper. Matching pairs of data points based on an identifier is practically performed within the human mind or with the aid of pen and paper.
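
As context for the mental-process characterization: the identifier matching at issue is, mechanically, a join on a shared key. A minimal illustrative sketch, with entirely hypothetical field names and records (not taken from the claims or the references):

```python
# Hypothetical sketch of the claimed merge: join panel data to database
# proprietor impression data on a shared impression identifier.
# All field names and records below are invented for illustration.
panel_data = [
    {"impression_id": "A1", "panelist": "p1", "demo": "M 25-34"},
    {"impression_id": "B2", "panelist": "p2", "demo": "F 35-44"},
]
proprietor_data = [
    {"impression_id": "A1", "user_account": "u9"},
    {"impression_id": "C3", "user_account": "u7"},
]

by_id = {row["impression_id"]: row for row in panel_data}
merged = [
    {**row, **by_id[row["impression_id"]]}   # keep only matching pairs
    for row in proprietor_data
    if row["impression_id"] in by_id
]
print(merged)   # one matched record linking user u9 to panelist p1
```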
The “estimating…” step is also practically performed within the human mind or with the aid of pen and paper. The quantity of machine learning models generated is irrelevant to the steps therein being practically performed within the human mind (“selecting…features…” and “selecting…hyperparameters…”). The formulation of the models is practically performed within the human mind, the formulation of the dataset used to train (instructions to apply) the models is practically performed within the human mind, and the selection of the top-performing model is practically performed within the human mind. The Examiner contends that an improvement generated by these claim limitations inherently comes from a series of mental process steps representing an improvement to data pre-processing before training a machine learning model, rather than an improvement to the machine learning model itself. Per MPEP 2106.05(a), the specific improvement cannot come from the abstract idea. See the updated 35 U.S.C. 101 Rejection below.

Applicant's arguments, see Pages 17-18, filed 06/02/2025, regarding the 35 U.S.C. 103 Rejection of claims 1, 3-7, 9, 11-12, 14-15, 25, 27-28, 30-31, and 33-36 have been fully considered, but they are not persuasive. The combination of Splaine and Zhan continues to read on the “generating a plurality of machine learning models…” step. Although Zhan fails to explicitly teach a number of models such as “one hundred machine learning models”, Zhan does teach: “In this embodiment, model parameters indicate an associated relationship between input vectors and output vectors of the machine learning models. In this embodiment, a plurality of model parameter combinations may be generated. For example, a plurality of model parameter combinations are generated by adjusting parameter values of the model parameters.
The model parameters of a machine learning model, e.g., an LDA model (Latent Dirichlet Allocation, a document topic generation model), include an α parameter, a β parameter, an iteration number n, and a topic number K. Values of the α parameter and β parameter may be adjusted to generate a plurality of model parameter combinations.” ([0026]), which could reasonably result in hundreds of combinations. A combination of Splaine and Zhan continues to teach the subsequent “selecting…features”, “selecting…hyperparameters”, and “generating…” steps, given these steps are merely expanded in detail from the previously filed claims. The relevant citations to sections of Splaine and Zhan are listed again below. A combination of Splaine and Zhan continues to read on the “training…” step, given these steps are merely expanded in detail from the previously filed claims. The relevant citations to sections of Splaine and Zhan are listed again below. The combination of Splaine and Zhan continues to read on the “selecting…” step, with Splaine further teaching the impression monitor (containing the rules/ML Engine (Fig. 2)) being at a separate location/server/storage. It would have been reasonable to one of ordinary skill in the art, in combining Splaine and Zhan, to store model or optimal model information in such a location or at a database proprietor. Paragraphs [0081]-[0082] of Splaine continue to read on the “applying…” step. The relevant citations to sections of Splaine and Zhan are listed again below.

Claim Rejections - 35 USC § 101

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. Claims 1, 3-7, 9, 11-12, 14-15, 25, 27-28, 30-31, and 34-37 are rejected under 35 U.S.C.
101 because the claimed invention is directed to a judicial exception (i.e., a law of nature, a natural phenomenon, or an abstract idea) without significantly more, and because the claims as a whole, considering all claim elements both individually and in combination, do not amount to significantly more than the abstract idea. See Alice Corporation Pty. Ltd. v. CLS Bank International, et al., 573 U.S. (2014). In determining whether the claims are subject matter eligible, the Examiner applies the 2019 USPTO Patent Eligibility Guidelines. (2019 Revised Patent Subject Matter Eligibility Guidance, 84 Fed. Reg. 50, Jan. 7, 2019.)

Step 1: Claims 1, 3-7, 34, and 37 recite a database proprietor computing system comprising a network interface, processor, and memory, which falls under the statutory category of a machine. Claims 9, 11-12, 14-15, and 35 recite a non-transitory computer readable storage medium comprising instructions, which falls under the statutory category of a manufacture. Claims 25, 27-28, 30-31, and 36 recite a method performed by a database proprietor computing system, which falls under the statutory category of a process.

Step 2A – Prong 1: Claim 1 recites an abstract idea, law of nature, or natural phenomenon.
The limitations of “merging the panel data with the database proprietor impression data by matching at least a portion of the user accounts with at least a portion of the panelists…”, “estimating an initial total audience size for the media;”, “generating a plurality of machine learning models…”, “selecting a set of features…”, “selecting a range of values of hyperparameters…”, “generating the machine learning model based on the selected set of features and the selected range of values…”, “selecting a first machine learning model from the trained plurality of machine learning models…”, and “adjusting the initial total audience size for the media based on the particular impression…” recite the abstract idea. Under the broadest reasonable interpretation, these steps cover a mental process including an observation, evaluation, judgment, or opinion that could be performed in the human mind or with the aid of pencil and paper. These limitations therefore fall within the mental process group.

Step 2A – Prong 2: The additional elements of claim 1 do not integrate the abstract idea into a practical application. The claim recites the additional elements “computing system”, “a network interface”, “a processor”, “a memory comprising instructions”, “client devices”, “media”, “user accounts”, and “an application programming interface”, which are recognized as generic computer components recited at a high level of generality. Although they store and execute instructions to perform the abstract idea itself, this does not serve to integrate the abstract idea into a practical application, as it merely amounts to instructions to "apply it." (See MPEP 2106.04(d)(2), indicating mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.)
The additional elements of “impression data”, “misattribution error”, “panel data”, “impression identifiers”, “a plurality of features”, “a range of hyperparameters”, “a topology of a neural network”, “a learning rate”, “a batch size”, “a plurality of machine learning models”, “demographic data”, and “audience measurement panelists” are recognized as not being generic computer components; however, they are recited at a high level of generality and found to generally link the abstract idea to a particular technological environment or field of use. The additional elements of “received via the network interface…”, “logging database proprietor impression data…”, “obtaining panel data from an audience measurement computing system…”, and “obtaining, from the user accounts, demographic data…” are found to be mere pre- or post-solution data retrieval or data transmittal steps (see MPEP 2106.05(g)). Furthermore, the limitations “training the plurality of machine learning models in parallel…” and “applying the first machine learning model to the database proprietor impression data to correct the computer-generated misattribution error…” merely amount to instructions to "apply it." (See MPEP 2106.05(f), indicating mere instructions to apply an abstract idea do not amount to integrating the abstract idea into a practical application.)

Step 2B: The only limitations on the performance of the described method are those reciting “impression data”, “misattribution error”, “panel data”, “impression identifiers”, “a plurality of features”, “a range of hyperparameters”, “a topology of a neural network”, “a learning rate”, “a batch size”, “a plurality of machine learning models”, “demographic data”, and “audience measurement panelists”.
These elements are insufficient to transform a judicial exception into a patentable invention because the recited elements are considered insignificant extra-solution activity (generic computer system, processing resources, linking the judicial exception to a particular, respective, technological environment). The claim thus recites computing components only at a high level of generality, such that it amounts to no more than mere instructions to apply the exception using generic computer components; mere instructions to apply an exception using a generic computer component cannot provide an inventive concept (see MPEP 2106.05(f)). Furthermore, as discussed above, the additional elements of “impression data”, “misattribution error”, “panel data”, “impression identifiers”, “a plurality of features”, “a range of hyperparameters”, “a topology of a neural network”, “a learning rate”, “a batch size”, “a plurality of machine learning models”, “demographic data”, and “audience measurement panelists” are recited at high levels of generality and were determined to generally link the abstract idea to a particular technological environment or field of use. These additional elements have been re-evaluated under Step 2B and have also been found insufficient to provide significantly more. (See MPEP 2106.05(h), indicating that generally linking an abstract idea to a particular technological environment does not amount to significantly more.) The additional elements of “received via the network interface…”, “logging database proprietor impression data…”, “obtaining panel data from an audience measurement computing system…”, and “obtaining, from the user accounts, demographic data…” are found to be mere data retrieval or data transmittal steps (see MPEP 2106.05(g)).
Similarly, as discussed above, the limitations “training the plurality of machine learning models in parallel…” and “applying the first machine learning model to the database proprietor impression data to correct the computer-generated misattribution error…” were recited at a high level of generality and merely amount to instructions to "apply it." These additional elements have been re-evaluated under Step 2B and have also been found insufficient to provide significantly more. (See MPEP 2106.05(f), indicating mere instructions to apply an abstract idea do not amount to significantly more.)

Taken alone or in ordered combination, these additional elements do not amount to significantly more than the above-identified abstract idea. There is no indication that the combination of elements improves the functioning of a computer or improves any other technology. Their collective functions merely provide conventional computer implementation. For the reasons above, claim 1 is rejected as being directed to non-patentable subject matter under §101. This rejection applies equally to independent claims 9 and 25, which recite a non-transitory computer readable storage medium comprising instructions and a method performed by a database proprietor computing system, respectively. It is noted that claim 9 recites generic computer components (non-transitory computer readable storage medium, at least one processor) at high levels of generality, and claim 25 recites generic computer components (memory, processor) at high levels of generality.

Dependent Claims: The limitations of the dependent claims, but for those addressed below, merely set forth further refinements of the abstract idea without changing the analysis already presented.
Claim 3 (claims 11 and 27) recites the limitation “generating performance results for the plurality of machine learning models”. This limitation has been evaluated under Step 2A Prong 2 and re-evaluated under Step 2B and found to be insignificant extra-solution activity (see MPEP 2106.05(g)(iii), first list). Claim 4 (claims 12 and 28) recites “comparing results from training the plurality of machine learning models to at least some of the demographic data of ones of the panelists who access media via panelist client devices;”, which is a mental process step (comparing training results to at least some demographic data can be performed by the human mind). Claim 4 (claims 12 and 28) also recites refinements to the aforementioned insignificant extra-solution activity of claim 3. Claim 5 merely refines elements of the aforementioned performance elements. Claim 6 (claims 14 and 30) recites the mental process step “aggregating the performance results of the plurality of machine learning models”. Claim 7 (claims 15 and 31) merely recites refinements to the aforementioned mental process steps of claim 1 by way of the mental process step of claim 6. Claim 34 (claims 35 and 36) recites the generic computer component “meter devices”. Claim 37 recites refinements to data types. These additional elements have been found to generally link the abstract idea to a particular technology or field of use (MPEP 2106.05(h)).

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3-7, 9, 11-12, 14-15, 25, 27-28, 30-31, and 34-36 are rejected under 35 U.S.C. 103 as being unpatentable over Splaine et al. (US 2014/0324545 A1), hereinafter Splaine, in view of Zhan et al. (US 2020/0090073 A1), hereinafter Zhan.

In regards to claim 1: The present invention claims: “A database proprietor computing system of a first cloud-based environment, comprising: a network interface; a processor; and memory comprising instructions that, when executed, cause the processor to perform operations comprising:” Splaine teaches a database proprietor with a computing system that receives communication via a network interface (Claim 1).
“based on network communications received via the network interface from client devices that accessed media via user accounts established by registered users of a database proprietor, logging database proprietor impression data associated with the user accounts, wherein the database proprietor impression data comprises a computer-generated misattribution error in which a particular impression of the media is attributed to demographic data of a first one of the registered users even though a second person, different from, and with different demographic data than, the first one of the registered users, viewed the media in connection with the impression;” See Splaine Claim 1 for receiving communication from client devices, see Splaine Figure 2 for various storage components containing logged media impressions, and see Splaine [0028] teaching users registered with said database proprietors. Splaine [0081] also goes into detail regarding impression data from audience members, including “For example such a rule may specify an override of user-level preferred target partners when the user-level preferred target partner sends a number of indications that it does not have a registered user corresponding to the client device 202, 203 (e.g., a different user on the client device 202, 203 begins using a different application having a different user ID in its partner cookie 216).” (mapping to “…wherein the database proprietor impression data comprises a computer-generated misattribution error in which a particular impression…”, as the data may contain impressions assigned to one person that are actually from a different user).
“obtaining panel data from an audience measurement computing system of a second cloud-based environment via the network interface and via an application programming interface that connects the audience measurement computing system to the database proprietor computing system, the panel data comprising media impression data and corresponding demographic data for panelists of an audience measurement entity that were exposed to the media, wherein the media impression data of the panel data is collected independently of the database proprietor impression data;” Splaine teaches audience measurement entities using panel data ([0026], impression data separate from the proprietor, gathered individually), which is separate from the data obtained by database proprietors (“…second cloud-based environment…”), and a communication interface between said audience measurement entities and database proprietors ([0023]-[0032]).

“merging the panel data with the database proprietor impression data by matching at least a portion of the user accounts with at least a portion of the panelists based on matches between impression identifiers collected in connection with the media impression data of the panel data and corresponding impression identifiers of the database proprietor impression data;” Splaine Figure 2 teaches matching IDs. Splaine [0032]-[0034] teach how the audience measurement entities take the impression and demographic data from the database proprietors and create “an enormous, demographically accurate panel that results in accurate, reliable measurements of exposures to Internet content such as advertising and/or programming.”

“estimating an initial total audience size for the media;” Splaine [0035] teaches the use of Gross Rating Points for audience size.
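
For reference, Gross Rating Points as cited from Splaine [0035] are conventionally impressions expressed as a percentage of the audience universe, with reach (an initial audience size) recoverable under an assumed average frequency. A sketch with invented numbers:

```python
# Standard GRP arithmetic with invented numbers (not from Splaine).
impressions = 1_500_000      # total logged media impressions
universe = 10_000_000        # size of the target population

grp = impressions / universe * 100
print(f"GRP: {grp:.0f}")     # 15 rating points

# A rough initial audience size (reach) under an assumed frequency:
avg_frequency = 1.5          # assumed average exposures per person reached
reach = impressions / avg_frequency
print(f"Initial audience size estimate: {reach:,.0f}")   # 1,000,000
```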
“obtaining, from the user accounts, demographic data of a subset of the panelists that the merged data identifies as ones of the registered users of the database proprietor;” See above, where Splaine teaches the means by which data is connected to any one of the particular database proprietors when they receive the data.

“applying the first machine learning model to the database proprietor impression data to correct the computer-generated misattribution error, wherein correcting the computer-generated misattribution error comprises predicting the demographic data of the second person using the first machine learning model and replacing the demographic data of the first one of the registered users attributed to the particular impression of the media with the predicted demographic data of the second person;” Splaine teaches that the goal of their system is an accurate, large dataset of impression data ([0034]). Splaine also teaches “In some examples, the rules/ML engine 230 specify when to override user-level preferred target partners with publisher or publisher/campaign level preferred target partners. For example such a rule may specify an override of user-level preferred target partners when the user-level preferred target partner sends a number of indications that it does not have a registered user corresponding to the client device 202, 203 (e.g., a different user on the client device 202, 203 begins using a different application having a different user ID in its partner cookie 216).” (correcting the misattribution error).

“adjusting the initial audience size for the media based on the particular impression attributed to the demographic data of the second person.” Splaine teaches the GRPs being affected by the correlation of the demographic data from the database proprietors and the audience measurement entities ([0041]).
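
The “applying…” limitation quoted above boils down to overwriting a misattributed impression's demographics with a model prediction. A toy sketch in which predict_demo is a hypothetical stand-in for the trained model:

```python
# Toy sketch of correcting a misattributed impression: replace the
# demographics logged for the account holder with the model's prediction
# for the actual viewer. predict_demo is a hypothetical stand-in.
def predict_demo(impression):
    return "F 18-24"   # a real system would run the selected model here

impression = {"media": "ad-42", "demo": "M 45-54"}   # misattributed record
impression["demo"] = predict_demo(impression)        # corrected record
print(impression)   # {'media': 'ad-42', 'demo': 'F 18-24'}
```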
Splaine fails to specifically teach “generating a plurality of machine learning models, wherein the plurality of machine learning models comprises at least one hundred machine learning models, and wherein each machine learning model of the plurality of machine learning models is generated by: selecting a set of features from a plurality of features comprising self-declared demographics of the registered users of the database proprietor;” Zhan, however, teaches “In this embodiment, model parameters indicate an associated relationship between input vectors and output vectors of the machine learning models. In this embodiment, a plurality of model parameter combinations may be generated. For example, a plurality of model parameter combinations are generated by adjusting parameter values of the model parameters. The model parameters of a machine learning model, e.g., an LDA model (Latent Dirichlet Allocation, a document topic generation model), include an α parameter, a β parameter, an iteration number n, and a topic number K. Values of the α parameter and β parameter may be adjusted to generate a plurality of model parameter combinations.” ([0026]), which could reasonably result in hundreds of combinations and reads on the “unique combination of features and ranges of values of hyperparameters”.

“selecting a range of values of hyperparameters, the hyperparameters comprising two or more of a topology of a neural network, a size of the neural network, a learning rate of the neural network, or a batch size of the neural network,” Zhan teaches “…this disclosure provides an apparatus for generating a machine learning model, including: a generation unit, configured to generate model parameter combinations…” (mapping to a plurality of features and a range of values of hyperparameters) ([0007]).
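
The observation that adjusting Zhan's parameter values “could reasonably result in hundreds of combinations” is plain combinatorics; for example, with invented value grids over the α, β, and topic-number parameters Zhan names:

```python
import itertools

# Invented value grids for the LDA parameters Zhan names; only the
# combinatorics matters here, not the specific values.
alpha_values = [0.01, 0.05, 0.1, 0.5, 1.0]
beta_values = [0.01, 0.05, 0.1, 0.5, 1.0]
topic_counts = [10, 20, 50, 100]

combos = list(itertools.product(alpha_values, beta_values, topic_counts))
print(len(combos))   # 100 combinations (5 * 5 * 4), i.e. one hundred models
```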
“and generating the machine learning model based on the selected set of features and the selected range of values, wherein each machine learning model is generated based on a unique combination of features and ranges of values of hyperparameters;” Zhan teaches “…and generate machine learning models respectively corresponding to the model parameter combinations…” (mapping to generating a plurality of machine learning models based on the selected set of features and the range of values of hyperparameters) ([0007]).

“training the plurality of machine learning models in parallel, wherein training the plurality of machine learning models in parallel comprises training each machine learning model based on the demographic data of the subset of the panelists used as a truth set of data, the set of features selected for the machine learning model, and the range of values of hyperparameters selected for the machine learning model, each trained machine learning model of the trained plurality of machine learning models being configured to predict correct demographic data of the registered users of the database proprietor;” Zhan teaches “…training the machine learning models in parallel respectively based on the training data;” ([0007]). While one may question, under the broadest reasonable interpretation of the claim, the relevancy of the kind or type of data being manipulated in a system such as Zhan’s, Zhan fails to explicitly teach “…based on the demographic data of the subset of the panelists.” However, Splaine teaches “an example system 200 that may be used to associate exposure measurements with user demographic information based on demographics information distributed across user account records of different database proprietors (e.g., web service providers).” and “In some examples, the example system 200 uses rules and machine learning classifiers (e.g., based on an evolving set of empirical data) to determine a relatively best-suited partner that is likely to have
demographics information for a user that triggered a beacon request.” ([0065], see also [0078]).

“selecting a first machine learning model from the trained plurality of machine learning models, wherein the selecting is based on performance results for the trained plurality of machine learning models, and wherein the performance results are stored in a privacy-protected datastore of the audience measurement entity within the database proprietor computing system;” Zhan teaches “…where the model parameters indicate an associated relationship between input vectors and output vectors of the machine learning models; a division unit, configured to execute a dividing operation: dividing preset machine learning data into training data and validation data; a processing unit, configured to execute training and validation operations: training the machine learning models in parallel respectively based on the training data; and validating a learning accuracy of the trained machine learning models respectively based on the validation data to obtain validation scores, where the validation scores indicate a ratio of consistency between data types corresponding to the output vectors output by the machine learning models based on the validation data and types of the validation data; and an execution unit, configured to execute a model generation operation: determining an optimal model parameter combination corresponding to a machine learning model to be generated based on the validation scores, and generating a machine learning model corresponding to the optimal model parameter combination.” (mapping to selecting a first machine learning model from the trained plurality of machine learning models) ([0007]). Zhan highlights that their method of machine learning model and parameter optimization and selection addresses the large overhead incurred by selecting, testing, and training machine learning models.
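
Zhan's quoted generate/train/validate/select pipeline amounts to scoring one model per parameter combination in parallel and keeping the combination with the best validation score. A compact sketch in which train_and_score is a placeholder for actual model fitting and validation:

```python
from concurrent.futures import ThreadPoolExecutor

def train_and_score(params):
    # Placeholder for fitting a model with this parameter combination and
    # validating it; returns a toy validation score peaking at (0.1, 0.05).
    alpha, beta = params
    return 1.0 - abs(alpha - 0.1) - abs(beta - 0.05)

combos = [(a, b) for a in (0.01, 0.1, 1.0) for b in (0.01, 0.05, 0.1)]

# Train/validate the candidate models in parallel, then select the
# combination with the highest validation score.
with ThreadPoolExecutor() as pool:
    scores = list(pool.map(train_and_score, combos))

best_combo, best_score = max(zip(combos, scores), key=lambda cs: cs[1])
print("selected combination:", best_combo)   # (0.1, 0.05)
```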
Splaine highlights the difficulty and potential inaccuracies of tracking user internet web usage with server logs and the need to correctly attribute demographic information to users and their web usage ([0023]-[0026]). It would have been obvious to one of ordinary skill in the art at the time of the applicant's filing to combine the machine learning model generation and selection system of Zhan, to generate an optimal model for use in a system such as Splaine's, to improve the machine learning component that accurately attributes demographic user data to users' internet usage.

In regards to claim 3: The present invention claims: “The database proprietor computing system of claim 1, the operations further comprising generating performance results for the plurality of machine learning models.” Zhan teaches a validation unit and “validating a learning accuracy of the trained machine learning models respectively based on the validation data to obtain validation scores, where the validation scores indicate a ratio of consistency between data types corresponding to the output vectors output by the machine learning models based on the validation data and types of the validation data;” ([0007], mapping validation scores to performance results).

In regards to claim 4: The present invention claims: “comparing results from training the plurality of machine learning models to at least some of the demographic data of ones of the panelists who access media via panelist client devices; and generating the performance results based on the comparison.” Zhan teaches a dividing unit that creates validation data out of the machine learning data. See the rejection of claim 3 for where Zhan teaches the comparison of the machine learning output to the validation data.
While Zhan fails to explicitly teach “the demographic data of ones of the panelists who access media via panelist client devices;”, see the above rejection of claim 1 for how a combination of Zhan and Splaine would be obvious. Such a combination would dictate that the validation data being compared against would be demographic data.

In regards to claim 5: The present invention claims: “The database proprietor computing system of claim 4, wherein the performance results include at least one of model accuracy or demographic accuracy.” Zhan teaches “…and validating a learning accuracy of the trained machine learning models respectively based on the validation data to obtain validation scores,” ([0007]).

In regards to claim 6: The present invention claims: “The database proprietor computing system of claim 4, the operations further comprising aggregating the performance results of the plurality of machine learning models.” See the above rejections for how Zhan teaches validation scores used to measure the performance of the generated models ([0007]). See also [0045]-[0047] for how Zhan teaches multiple models being trained and validated in parallel, and having their validation scores averaged in a Reduce task before being returned (mapping to “aggregate the performance results”).
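
The Reduce-task averaging of validation scores attributed to Zhan [0045]-[0047] is an ordinary fold-then-average; a sketch with invented per-fold scores:

```python
from functools import reduce

# Invented per-fold validation scores for one candidate model, as a Map
# task might emit them.
fold_scores = [0.81, 0.79, 0.84, 0.80]

# Reduce task: fold the scores into a sum, then average.
total = reduce(lambda acc, s: acc + s, fold_scores, 0.0)
avg_score = total / len(fold_scores)
print(f"aggregated validation score: {avg_score:.3f}")   # 0.810
```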
In regards to claim 7: The present invention claims: “The database proprietor computing system of claim 6, wherein selecting the first machine learning model from the plurality of machine learning models comprises selecting the first machine learning model from the plurality of machine learning models based on the aggregated performance results.” Zhan teaches “and an execution unit, configured to execute a model generation operation: determining an optimal model parameter combination corresponding to a machine learning model to be generated based on the validation scores, and generating a machine learning model corresponding to the optimal model parameter combination.” ([0007]).

In regards to claims 9, 11, 12, 14, and 15: The aforementioned claims share similar limitations with the ones rejected above, save for “A non-transitory computer readable storage medium comprising instructions…”; therefore these claims are similarly rejected.

In regards to claims 25, 27, 28, 30, and 31: The aforementioned claims share similar limitations with the ones rejected above, save for “A method performed by a database computing system…comprising a processor and memory”; therefore these claims are similarly rejected.

In regards to claim 34: The present invention claims: “The database proprietor computing system of claim 1, wherein the audience measurement computing system is configured to communicate over a network with a plurality of meter devices to obtain the panel data, wherein the plurality of meter devices are located at a plurality of households of the panelists.” Splaine teaches “To monitor browsing behavior and track activity of the partner cookie(s) 216, the user client device 202 is provided with a web client meter 222.” and “In the illustrated example, the web client meter 222 stores user IDs of the partner cookie(s) 216 and the panelist monitor cookie 218 in association with each logged HTTP request in the HTTP requests log 224.” ([0077]).
In regards to claims 35 and 36: The aforementioned claims share similar limitations with claim 34; therefore these claims are similarly rejected.

Claim 37 is rejected under 35 U.S.C. 103 as being unpatentable over Splaine and Zhan as applied to claim 1 above, and further in view of Mayank Kejriwal (Domain-Specific Knowledge Graph Construction, 2019), hereinafter Kejriwal.

In regards to claim 37: The combination of Splaine and Zhan fails to explicitly teach “wherein the plurality of features further comprises (i) a number of top search result click entities and (ii) a number of top video watch entities, wherein each entity of the number of top search result click entities and the number of top video watch entities corresponds to a particular node in a knowledge graph maintained by the database proprietor, the knowledge graph comprising at least tens of millions of unique identifiers for all entities associated with search result clicks and videos watched.” Although Splaine teaches “The redirection initiates a communication session between the client accessing the tagged content and the database proprietor. The database proprietor (e.g., Facebook) can access any cookie it has set on the client to thereby identify the client based on the internal records of the database proprietor. In the event the client is a subscriber of the database proprietor, the database proprietor logs the content impression in association with the demographics data of the client and subsequently forwards the log to the audience measurement company.” ([0031]) and “In some examples, the impression monitor 132 could also update target partner sites based on user behavior.
For example, such user behavior could be derived from analyzing cookie clickstream data corresponding to browsing activities associated with panelist monitor cookies (e.g., the panelist monitor cookie 218).” ([0081]), which could read on the features of claim 37 under the broadest reasonable interpretation of the claim, the combination of Splaine and Zhan fails to explicitly teach the recited data types or knowledge graph; however, Kejriwal teaches storing large amounts of search-result-related data pertaining to search engines or videos (Preface, first paragraph, mapping to data relevant to top search results, top video watch entities, or other large-scale, interconnected data entities) as knowledge graphs or domain-specific knowledge graphs. Sections 1.3 and 1.5 offer details indicating that storing such data pertaining to search engines or videos in a knowledge graph would have been known to one skilled in the art at the time of the Applicant's filing. Chapter 3 goes into detail on entity resolution, further demonstrating that unique identifiers would have been obvious to one skilled in the art at the time of the Applicant's filing. Kejriwal highlights some of the benefits of domain-specific knowledge graphs (Preface) and the storing of data similar to the instant application's in a knowledge graph for machine learning (Page 2). It would have been obvious to one of ordinary skill in the art at the time of the Applicant's filing to store large amounts of search- or video-related data such as this in a knowledge graph for use in a machine learning system such as the combination of Splaine and Zhan.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to GRIFFIN T BEAN, whose telephone number is (703) 756-1473. The examiner can normally be reached M-F, 7:30-4:30. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Li Zhen, can be reached at (571) 272-3768. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/GRIFFIN TANNER BEAN/
Examiner, Art Unit 2121

/Li B. Zhen/
Supervisory Patent Examiner, Art Unit 2121
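For readers less familiar with the Zhan reference the examiner leans on for claims 3-7, the cited flow (validate each candidate model, average the validation scores in a Reduce step, then pick the optimal parameter combination) can be sketched in a few lines. This is an illustrative sketch with assumed names and example scores, not code from any reference of record.

```python
import statistics

# Hypothetical sketch of the Zhan-style flow cited against claims 3-7:
# each candidate parameter combination is validated on several data
# folds ("Map"), the validation scores are averaged ("Reduce"), and the
# combination with the best aggregate score is selected as the optimal
# model ("determining an optimal model parameter combination ... based
# on the validation scores", Zhan [0007]).

def select_optimal(fold_scores):
    """fold_scores maps a parameter combination to its per-fold scores."""
    # Reduce step: aggregate the performance results per combination.
    aggregated = {p: statistics.mean(s) for p, s in fold_scores.items()}
    # Selection step: the combination with the best aggregate score wins.
    best = max(aggregated, key=aggregated.get)
    return best, aggregated

scores = {"model_a": [0.80, 0.90], "model_b": [0.70, 0.95]}
best, agg = select_optimal(scores)  # model_a averages 0.85 vs 0.825
```

Note that under this scheme a model with one strong fold ("model_b" above) can still lose to a more consistent one, which is why the claims' "aggregated performance results" limitation maps naturally onto Zhan's averaged validation scores.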

Prosecution Timeline

May 10, 2021
Application Filed
May 10, 2021
Response after Non-Final Action
Aug 09, 2024
Non-Final Rejection — §101, §103
Oct 02, 2024
Interview Requested
Oct 08, 2024
Applicant Interview (Telephonic)
Oct 08, 2024
Examiner Interview Summary
Oct 23, 2024
Response Filed
Feb 03, 2025
Final Rejection — §101, §103
Apr 08, 2025
Interview Requested
Apr 16, 2025
Applicant Interview (Telephonic)
Apr 16, 2025
Examiner Interview Summary
Jun 02, 2025
Request for Continued Examination
Jun 06, 2025
Response after Non-Final Action
Jun 26, 2025
Non-Final Rejection — §101, §103
Jan 29, 2026
Non-Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12424302
ACCELERATED MOLECULAR DYNAMICS SIMULATION METHOD ON A QUANTUM-CLASSICAL HYBRID COMPUTING SYSTEM
2y 5m to grant Granted Sep 23, 2025
Patent 12314861
SYSTEMS AND METHODS FOR SEMI-SUPERVISED LEARNING WITH CONTRASTIVE GRAPH REGULARIZATION
2y 5m to grant Granted May 27, 2025
Patent 12261947
LEARNING SYSTEM, LEARNING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Mar 25, 2025
Study what changed to get past this examiner. Based on 3 most recent grants.

Prosecution Projections

4-5
Expected OA Rounds
21%
Grant Probability
50%
With Interview (+28.4%)
4y 4m
Median Time to Grant
High
PTA Risk
Based on 19 resolved cases by this examiner. Grant probability derived from career allow rate.
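The headline numbers above appear to follow simple career-rate arithmetic; the sketch below is an assumption about how the dashboard derives them, not its actual model.

```python
# Assumed derivation of the projections shown above:
# grant probability = examiner's career allow rate (4 granted / 19 resolved),
# and the interview figure adds the observed +28.4-point lift.
granted, resolved = 4, 19
base = granted / resolved        # 4/19 ~ 0.2105, displayed as 21%
with_interview = base + 0.284    # ~ 0.495, roughly the displayed 50%
```

The sum lands just under 0.50, so the page's "50%" likely reflects rounding of unrounded underlying figures.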
