Prosecution Insights
Last updated: April 19, 2026
Application No. 18/524,493

SYSTEMS AND METHODS FOR DETERMINING USER INTEREST WHILE MAINTAINING PRIVACY

Final Rejection — §103, §112
Filed: Nov 30, 2023
Examiner: RIEGLER, PATRICK F
Art Unit: 2171
Tech Center: 2100 — Computer Architecture & Software
Assignee: Capital One Services LLC
OA Round: 2 (Final)
Grant Probability: 55% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 4y 5m
With Interview: 89%

Examiner Intelligence

Career Allow Rate: 55% (189 granted / 346 resolved; at TC average)
Interview Lift: +34.6% for resolved cases with an interview
Typical Timeline: 4y 5m average prosecution; 36 applications currently pending
Career History: 382 total applications across all art units
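The headline figures above reconcile arithmetically. The sketch below assumes the displayed allow rate is simply granted divided by resolved, and that pending plus resolved equals total applications (inferences from the displayed numbers, not a documented formula):

```python
# Reconciling the examiner card (assumed arithmetic, not a stated formula).
granted = 189       # granted applications
resolved = 346      # total resolved cases
pending = 36        # currently pending
total_applications = 382

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")   # ~54.6%, displayed as 55%

# Resolved + pending should account for the full career docket.
assert resolved + pending == total_applications  # 346 + 36 = 382
```

The unrounded rate (~54.6%) matters later: the 89% with-interview projection only reconciles if the lift is added before rounding.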

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 51.9% (+11.9% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 18.2% (-21.8% vs TC avg)
Deltas are relative to a Tech Center average estimate • Based on career data from 346 resolved cases
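The per-statute deltas are internally consistent: subtracting each delta from the examiner's rate recovers the same Tech Center average estimate for every row. A small check, assuming delta = examiner rate minus TC average:

```python
# Statute-specific panel consistency check.
# Each row: (examiner allowance rate %, delta vs TC average in points).
rows = {
    "101": (8.7, -31.3),
    "103": (51.9, +11.9),
    "102": (14.5, -25.5),
    "112": (18.2, -21.8),
}

for statute, (rate, delta) in rows.items():
    implied_tc_avg = rate - delta  # back out the TC average from the delta
    print(f"\u00a7{statute}: examiner {rate}% -> implied TC avg {implied_tc_avg:.1f}%")
```

Every row implies the same ~40% Tech Center average, which suggests the panel uses a single TC-wide baseline rather than a per-statute one.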

Office Action

§103 §112
DETAILED ACTION

This FINAL action is in response to Application No. 18/524,493 filed 11/30/2023. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The amendment presented on 12/22/2025, which provides amendments to claims 1, 8, and 14, is hereby acknowledged. Claims 1-20 are currently pending.

Response to Arguments

Applicant’s arguments with respect to the prior art rejections have been considered; however, they are unpersuasive. Pertaining to claim 1, Applicant appears to argue that the amended language of “monitoring, by a software module executed locally on a user device, a plurality of user interactions with one or more items on one or more interfaces, the plurality of user interactions being exclusive of item level data of the one or more items” differentiates from the recitations in Cox, which disclose monitoring user activity while the user is interacting with a user interface to identify activity indicative of a user interest in a location within the user interface. However, these “locations” are associated with “content elements” as described in Cox at [0025]. The Examiner submits the “locations/content elements” in Cox are equivalent to the one or more items on one or more interfaces. Additionally, while the claim recites that user interactions are exclusive of “item level data”, the claim does not define what is included in the user interactions. “One or more items” is interpretable as not modifying the previous scope if the user interaction is not supposed to have information identifying the items. In claim 4, the “item level data” is defined as associated with the “one or more interfaces”, not the “one or more items”. Pertaining to claim 8, Applicant does not discuss a particular prior art reference. The amendment necessitated the inclusion of Cox in an obviousness rejection.
As previously established, both Cox and Dorai monitor user interactions with content in a user interface and determine a user interest level/score pertaining to the content. Cox additionally provides the process of Figure 8, which monitors user interactions (step 802) and, when interest in a point is detected (step 804), stores interaction information pertaining to a position/point within one or more interfaces. With Paragraph [0024] reciting “an amount of time that a user interacts with a particular user interface widget that indicates that a user spends more time gazing at a particular entry in a stream of entries displayed within a user interface”, it is arguable that the time gazing is subject to a comparison to a threshold time, which in turn is construable as a score. Therefore, the detection of interest is based on a threshold score/time of the gaze. Paragraph [0039] states “gaze detector records the specific points identified by the user's gaze activity with the amount and duration of the gaze within points of interest log”. It is important to note the paragraph continues by stating “In addition, gaze detector 232 may identify target information within the user interface that is displayed at the gaze point and gaze region and store the identified content within points of interest log 112.” Thus, the inclusion of the target/content information in the log is written as optional during the process of Figure 8; therefore, Cox suggests first capturing user interaction information exclusive of item level data. Figure 9 is a distinct process for taking the identified points of interest in the log and capturing specific content (item) data on the local device.
One of ordinary skill in the art would understand that Cox suggests identifying user interest by first capturing user interaction data devoid of item/content data based on a threshold interaction time/score, and thereafter having a separate process identify the item/content data associated with the captured user interaction data. Indeed, Dorai additionally provides a machine-learning model to determine and output the user interest score and performs functions on content in response to the score exceeding a threshold. The Examiner maintains the combination of Cox and Dorai suggests using a machine learning model in the process of determining the user interest as outlined in the rejections below.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(d):

(d) REFERENCE IN DEPENDENT FORMS.—Subject to subsection (e), a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

The following is a quotation of pre-AIA 35 U.S.C. 112, fourth paragraph:

Subject to the following paragraph [i.e., the fifth paragraph of pre-AIA 35 U.S.C. 112], a claim in dependent form shall contain a reference to a claim previously set forth and then specify a further limitation of the subject matter claimed. A claim in dependent form shall be construed to incorporate by reference all the limitations of the claim to which it refers.

Claim 12 is rejected under 35 U.S.C. 112(d) or pre-AIA 35 U.S.C. 112, 4th paragraph, as being of improper dependent form for failing to further limit the subject matter of the claim upon which it depends, or for failing to include all the limitations of the claim upon which it depends. Regarding claim 12, it does not appear that the claim further limits the subject matter of claim 8.
The scope of both claims consists of triggering, by the one or more processors, a capturing of the item level data of the one or more interfaces based on the user interest score exceeding a user interest score threshold, wherein the item level data of the one or more interfaces is stored locally on the user device. Applicant may cancel the claim(s), amend the claim(s) to place the claim(s) in proper dependent form, rewrite the claim(s) in independent form, or present a sufficient showing that the dependent claim(s) complies with the statutory requirements.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3-6, 8-14 and 16-19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cox et al. (US 2017/0153797 A1, hereinafter “Cox”), and further in view of Dorai-Raj et al. (US 2020/0226418 A1, hereinafter “Dorai”).
Regarding claim 1, Cox teaches a computer-implemented method for determining user interest, comprising: monitoring, by a software module executed locally on a user device, a plurality of user interactions with one or more items on one or more interfaces, the plurality of user interactions being exclusive of item level data of the one or more items; More specifically, activity monitor 110 may monitor for user interaction behaviors that indicate the user has an interest in particular selections of content within a display interface (Cox, [0024]). User interactions with particular selections of content can include gazing and scrolling (Cox, [0023], [0038]).

…identify user interest behavior patterns and output a user interest score; More specifically, model 116 may indicate a level of user interest for each of the content elements based on additional activity characteristics identified in association with the content elements in points of interest log 112, such as an amount of time that a user spends gazing at each content element (Cox, [0025]).

outputting, …, the user interest score based on the plurality of user interactions; determining, by the one or more processors, that the user interest score exceeds a user interest score threshold; More specifically, model 116 may categorize the amount of time that a user spends gazing at each content element into threshold categories matching the amount of time, such as a high threshold for content elements gazed at for an amount of time greater than a high threshold time, a medium threshold for content elements gazed at for an amount of time greater than a low threshold time and less than a high threshold time, and a low threshold for content elements gazed at for an amount of time less than the low threshold time (Cox, [0025]).
triggering, by the one or more processors, a capturing of the item level data of the one or more interfaces based on the user interest score exceeding the user interest score threshold, wherein the item level data of the one or more interfaces is stored locally on the user device. More specifically, Figure 9 depicts the process of capturing the content associated with the locations of interest according to an interest threshold, which occurs after the process of Figure 8 where interest levels are identified (Cox, [0076]-[0079]).

However, Cox may not explicitly teach every aspect of providing, by one or more processors, the plurality of user interactions to a machine-learning model, wherein the machine-learning model has been trained, using one or more gathered and/or simulated sets of user interactions [to determine and output interest scores]. Dorai discloses generating a set of training data from received user interaction data, inputting the set of training data to a machine learning model to train the model, and generating a set of user interest scores for the particular user that each indicate a user's interest in accessing information corresponding to a UI element of the application (Dorai, abstract). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Cox and Dorai that a method for determining user interest levels with content in a user interface would include using a machine learning model to determine interest levels.
With Cox and Dorai disclosing models determining interest levels of a user by monitoring user interactions with content, and with Dorai additionally disclosing that it is a machine learning model performing the determination, one of ordinary skill in the art of implementing a method for determining user interest levels with content in a user interface would include using a machine learning model to determine interest levels because such models have been shown to be a faster, more intelligent method for processing information. One would therefore be motivated to combine these teachings as doing so would create this method for determining user interest levels with content in a user interface.

Regarding claim 3, Cox and Dorai teach the computer-implemented method of claim 1, wherein the plurality of user interactions comprise one or more of interactions with one or more links embedded in the one or more interfaces or scrolling the one or more interfaces. More specifically, scrolling the user interface is criteria of determining user interests (Cox, [0023]; Dorai, [0020], [0040], [0102]).

Regarding claim 4, Cox and Dorai teach the computer-implemented method of claim 1, wherein the item level data comprises a plurality of attributes of the one or more interfaces, text of the one or more interfaces, and/or images of the one or more interfaces. More specifically, the content of interest could be images and/or text (Cox, [0034]; Dorai, [0027]).

Regarding claim 5, Cox and Dorai teach the computer-implemented method of claim 1, further comprising associating the item level data of the one or more interfaces with a user point of interest. More specifically, each of the points of interest identified within the interface is mapped to a separate content element displayed within the interface to form a model correlating each separate content element with a user interest of the particular user (Cox, abstract).
Regarding claim 6, Cox and Dorai teach the computer-implemented method of claim 5, further comprising outputting, by the one or more processors, to an output device of the user, an element associated with the user point of interest. More specifically, based on the model, within a stream accessed for review by the particular user, a flow of a selection of entries of interest that meet the user interest is identified from among multiple entries in the stream. A separate selectable navigation breakpoint is selectively displayed with each of the selection of entries of interest within the stream, wherein selection of each separate selectable navigation breakpoint steps through the flow of the selection of entries of interest only (Cox, abstract).

Regarding claim 8, Dorai teaches a computer-implemented method for training a machine-learning model for determining user interest, comprising: providing, by one or more processors, one or more gathered and/or simulated sets of user interactions to one or more machine-learning algorithms as one or more sets of training data; More specifically, receiving a set of user interaction data indicating a group of multiple different users' interactions with one or more UI elements, generating, from the received set of user interaction data, a set of user group training data, and inputting the set of user group training data to the machine learning model (Dorai, [0006]). determining, by the one or more machine-learning algorithms, associations between the one or more gathered and/or simulated sets of user interactions and one or more user interest behavior patterns; More specifically, entities that can have a user interest score include metrics (Dorai, [0053]). A metric measures performance, behavior, and/or activity (Dorai, [0054]). UIM 110 receives user interaction data from user device 104 over network 102, and predicts user interest in digital components (Dorai, [0059]).
modifying one or more of a layer, a weight, a synapse, or a node of a machine-learning model based on the associations between the one or more gathered and/or simulated sets of user interactions and one or more user interest behavior patterns; More specifically, training data generator 108 determines training example weights in one of several ways. In some implementations, training data generator 108 can weight each training example equally, and sample only data points within a predetermined period of time. In some implementations, training data generator 108 can use decay functions to weight training examples differently depending on how recent the example is (Dorai, [0074]). outputting, by the one or more processors, the machine-learning model, wherein the machine-learning model is trained to identify user interest behavior patterns based on a plurality of user interactions and output a user interest score based on the plurality of user interactions and the modified one or more of the layer, the weight, the synapse, or the node of the machine-learning model. More specifically, the UIM can then weight metrics more heavily in its user interest scoring model (Dorai, [0090]). The system generates, using the trained machine learning model, a set of user interest scores for the particular user, wherein each of the user interest scores is indicative of the user's interest in accessing information corresponding to a UI element of the application (410). In this particular example, UIM 110 uses neural network 111a to generate user interest scores for each entity referenced by the UI element to provide an overall interest score for the UI element (Dorai, [0110]).
However, Dorai may not explicitly teach every aspect of triggering, by the one or more processors, a capturing of item level data of one or more items based on a plurality of user interactions used to determine a user interest score exceeds a user interest score threshold, wherein the item level data is stored locally on a user device. Cox discloses an activity monitor 110 may monitor for user interaction behaviors that indicate the user has an interest in particular selections of content within a display interface (Cox, [0024]). Model 116 may indicate a level of user interest for each of the content elements based on additional activity characteristics identified in association with the content elements in points of interest log 112, such as an amount of time that a user spends gazing at each content element (Cox, [0025]). Model 116 may categorize the amount of time that a user spends gazing at each content element into threshold categories matching the amount of time, such as a high threshold for content elements gazed at for an amount of time greater than a high threshold time, a medium threshold for content elements gazed at for an amount of time greater than a low threshold time and less than a high threshold time, and a low threshold for content elements gazed at for an amount of time less than the low threshold time (Cox, [0025]). Figure 9 depicts the process of capturing the specific content on the local device associated with the position/points of interest within the user interface and logged according to an interest threshold that occurs after the process of Figure 8 where only interest levels in locations/points are identified (Cox, [0076]-[0079]). 
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Dorai and Cox that a method for training a machine learning model with user interactions for determining user interest levels with content in a user interface would include wherein the plurality of user interactions is exclusive of item level data of the one or more interfaces.

With Dorai and Cox disclosing models determining interest levels of a user by monitoring user interactions with content, and with Cox additionally disclosing that the interest level in a position/point on a user interface is determined first, and then the content/item detail for the interest point is determined, one of ordinary skill in the art of implementing a method for training a machine learning model with user interactions for determining user interest levels with content in a user interface would include wherein the plurality of user interactions is exclusive of item level data of the one or more interfaces because the system would benefit from not having to capture the content until interest is definitively determined, saving storage space and processing time. One would therefore be motivated to combine these teachings as doing so would create this method for training a machine learning model with user interactions for determining user interest levels with content in a user interface.

Regarding claim 9, Dorai and Cox teach the computer-implemented method of claim 8, wherein the one or more gathered and/or simulated sets of user interactions comprise one or more of interactions with one or more links embedded in one or more interfaces or scrolling the one or more interfaces. More specifically, user interaction data can come in other forms, such as event data (e.g., application listeners track events such as mouse clicks, scrolling) (Dorai, [0040]).
The action of scrolling such that content item 204c is within viewport 202 for at least a predetermined period of time can be considered a positive indication of interest in content item 204c (Dorai, [0102]).

Regarding claim 10, Dorai teaches the computer-implemented method of claim 8, further comprising monitoring, by a software module stored locally on a user device, the plurality of user interactions with one or more interfaces, wherein the plurality of user interactions is exclusive of item level data of the one or more interfaces. More specifically, Figure 9 depicts the process of capturing the specific content on the local device associated with the position/points of interest within the user interface and logged according to an interest threshold that occurs after the process of Figure 8 where only interest levels in locations/points are identified (Cox, [0076]-[0079]).

Regarding claim 11, Dorai and Cox teach the computer-implemented method of claim 10, further comprising: providing, by one or more processors, the plurality of user interactions to the machine-learning model; and outputting, by the machine-learning model, the user interest score based on the plurality of user interactions. More specifically, the UIM can then weight metrics more heavily in its user interest scoring model (Dorai, [0090]). The system generates, using the trained machine learning model, a set of user interest scores for the particular user, wherein each of the user interest scores is indicative of the user's interest in accessing information corresponding to a UI element of the application (410). In this particular example, UIM 110 uses neural network 111a to generate user interest scores for each entity referenced by the UI element to provide an overall interest score for the UI element (Dorai, [0110]).
Additionally, model 116 may indicate a level of user interest for each of the content elements based on additional activity characteristics identified in association with the content elements in points of interest log 112, such as an amount of time that a user spends gazing at each content element (Cox, [0025]).

Regarding claim 12, Dorai and Cox teach the computer-implemented method of claim 11, further comprising triggering, by the one or more processors, a capturing of the item level data of the one or more interfaces based on the user interest score exceeding a user interest score threshold, wherein the item level data of the one or more interfaces is stored locally on the user device. More specifically, model 116 may categorize the amount of time that a user spends gazing at each content element into threshold categories matching the amount of time, such as a high threshold for content elements gazed at for an amount of time greater than a high threshold time, a medium threshold for content elements gazed at for an amount of time greater than a low threshold time and less than a high threshold time, and a low threshold for content elements gazed at for an amount of time less than the low threshold time (Cox, [0025]).

Regarding claim 13, Dorai and Cox teach the computer-implemented method of claim 8, further comprising gathering, by a second machine-learning model, the one or more gathered and/or simulated sets of user interactions. More specifically, it would be obvious that more than one model can be involved, as suggested by Cox at [0010], [0076], [0108], in order to better structure the allocation of functions.

Regarding claims 14, 16-19, these claims recite the system that performs the steps of the method of claims 1, 3-6; therefore, the same rationale of rejection is applicable.

Claim(s) 2 and 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cox and Dorai, and further in view of Hewett et al. (US 2015/0128021 A1, hereinafter “Hewett”).
Regarding claim 2, Cox and Dorai teach the computer-implemented method of claim 1; however, they may not explicitly teach every aspect of wherein the software module executed locally on the user device is one of an extension or a plugin. Hewett discloses that a browser plugin may be arranged to monitor one or more actions performed by a user. In particular, a browser plugin may be arranged to track a user interaction with a web document and/or one or more portions of content-of-interest located in the web browser (Hewett, [0093]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Cox and Dorai with Hewett that a method for determining user interest levels with content in a user interface would include using a web browser plugin for monitoring of user interactions.

With Cox, Dorai, and Hewett disclosing models determining interest levels of a user by monitoring user interactions with content from a web site, and with Hewett additionally disclosing using a web browser plugin, one of ordinary skill in the art of implementing a method for determining user interest levels with content in a user interface would include using a web browser plugin for monitoring of user interactions in order to add the capability of monitoring user interactions to web browsers that may not have the native functionality. One would therefore be motivated to combine these teachings as doing so would create this method for determining user interest levels with content in a user interface.

Regarding claim 15, this claim recites the system that performs the steps of the method of claim 2; therefore, the same rationale of rejection is applicable.

Claim(s) 7 and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Cox and Dorai, and further in view of Jung et al. (US 2021/0144445 A1, hereinafter “Jung”).
Regarding claim 7, Cox and Dorai teach the computer-implemented method of claim 1; however, they may not explicitly teach every aspect of wherein the item level data of the one or more interfaces is captured for a predetermined amount of time. Jung discloses providing content of interest within broadcast media (Jung, abstract). The display apparatus may record a content of interest for a pre-set time and store the recorded content of interest (Jung, [0052]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention given the teachings of Cox and Dorai with Jung that a method for determining user interest levels with content in a user interface would include capturing the content for a predetermined period of time.

With Cox, Dorai, and Jung disclosing determining content of interest for a user, and with Jung additionally disclosing capturing the content for a predetermined period of time, one of ordinary skill in the art of implementing a method for determining user interest levels with content in a user interface would include capturing the content for a predetermined period of time in order to provide efficient storage management where capturing of content of interest could otherwise be continuous. One would therefore be motivated to combine these teachings as doing so would create this method for determining user interest levels with content in a user interface.

Regarding claim 20, this claim recites the system that performs the steps of the method of claim 7; therefore, the same rationale of rejection is applicable.

Pertinent Prior Art

The prior art made of record on form PTO-892 and not relied upon is considered pertinent to applicant's disclosure. Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action. Lee (US 2015/0089520 A1) – determining a user’s level of interest in content by monitoring user interactions using machine learning.
Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PATRICK F RIEGLER whose telephone number is (571) 270-3625. The examiner can normally be reached M-F 9:30am-6:00pm, ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kieu Vu, can be reached at (571) 272-4057. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /PATRICK F RIEGLER/ Primary Examiner, Art Unit 2171

Prosecution Timeline

Nov 30, 2023
Application Filed
Sep 23, 2025
Non-Final Rejection — §103, §112
Nov 17, 2025
Examiner Interview Summary
Nov 17, 2025
Applicant Interview (Telephonic)
Dec 22, 2025
Response Filed
Jan 10, 2026
Final Rejection — §103, §112
Mar 18, 2026
Applicant Interview (Telephonic)
Mar 18, 2026
Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12547824
USER INTERFACE DATA ANALYZER HIGHLIGHTER
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12542869
Video Conference Apparatus, Video Conference Method and Computer Program Using a Spatial Virtual Reality Environment
Granted Feb 03, 2026 (2y 5m to grant)
Patent 12535935
SYSTEMS AND METHODS FOR ANNOTATION PANELS
Granted Jan 27, 2026 (2y 5m to grant)
Patent 12505140
AN INFORMATION INTERACTION VIA A MULTIMEDIA CONFERENCE
Granted Dec 23, 2025 (2y 5m to grant)
Patent 12500984
NOTIFICATION SYSTEM NOTIFYING USER OF MESSAGE, CONTROL METHOD THEREFOR, AND STORAGE MEDIUM STORING CONTROL PROGRAM THEREFOR
Granted Dec 16, 2025 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 55%
With Interview: 89% (+34.6%)
Median Time to Grant: 4y 5m
PTA Risk: Moderate
Based on 346 resolved cases by this examiner. Grant probability derived from career allow rate.
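The with-interview figure appears to be simple percentage-point arithmetic: the +34.6-point interview lift added to the unrounded career allow rate. The sketch below assumes that formula (inferred from the displayed numbers; the page does not state how it combines them):

```python
# How "89% with interview" appears to be derived (assumed formula).
granted, resolved = 189, 346
base = granted / resolved * 100   # unrounded career allow rate, ~54.6%
interview_lift = 34.6             # percentage points, from the examiner card

with_interview = base + interview_lift
print(f"Grant probability with interview: {with_interview:.1f}%")  # ~89.2%, shown as 89%
```

Note that starting from the rounded 55% would give 89.6% (rounding to 90%), so the displayed 89% only reconciles if the lift is applied to the unrounded rate.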
