Prosecution Insights
Last updated: April 19, 2026
Application No. 18/923,609

METHODS, APPARATUSES AND COMPUTER PROGRAM PRODUCTS FOR GAZE-DRIVEN ADAPTIVE CONTENT GENERATION

Status: Non-Final OA (§103)
Filed: Oct 22, 2024
Examiner: BRITTINGHAM, NATHANIEL P
Art Unit: 2629
Tech Center: 2600 — Communications
Assignee: Meta Platforms Inc.
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 2y 9m
Grant Probability with Interview: 92%

Examiner Intelligence

Career Allow Rate: 74% — above average (340 granted / 461 resolved; +11.8% vs TC avg)
Interview Lift: +17.7% among resolved cases with interview (strong)
Typical Timeline: 2y 9m average prosecution; 11 applications currently pending
Career History: 472 total applications across all art units

Statute-Specific Performance

§101: 1.4% (-38.6% vs TC avg)
§103: 57.7% (+17.7% vs TC avg)
§102: 20.3% (-19.7% vs TC avg)
§112: 14.5% (-25.5% vs TC avg)
Black line = Tech Center average estimate • Based on career data from 461 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 28, 2026 has been entered.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over applicant-IDS-cited Chen et al. (CN112507799A.
See the examiner's Espacenet English translation for the mapping, which was attached to the 06/04/2025 non-final rejection) in view of Everest (US 20250124798 A1).

Regarding claims 1, 10, and 19, Chen teaches an apparatus (Figs. 1 and 2, [0001] show and teach augmented/mixed reality MR glasses) and a method comprising: one or more processors ([0041], “MR glasses local processor.” [0057], “The local processor of the MR glasses terminal is used to run a pre-trained feature database to identify objects of interest in the image”); and at least one memory or non-transitory computer readable medium storing instructions ([0058], “the local memory of the MR glasses”) that, when executed by the one or more processors, cause the apparatus to: implement a trained interaction intention model comprising training data pre-trained, or trained in real-time based on captured content or prestored content associated with gazes of users, pupil dilations of the users, facial expressions of the users, muscle movements of the users, heart rates, or gaze dwell times of the users determined previously or in real time ([0172], “a trained interaction intention model Match, and then determine the user's degree of interest in the current gaze position and if the degree of interest is equal to/exceeds the threshold, step (3) is executed.” [0176], “in step (2) and step (3), by detecting eye movement, head movement, body movement, and sound to analyze the user's current interaction intention in real time, and whether there is an object of interest, and obtain through the above behavior analysis. The degree of interest is used as a condition for starting object detection and image recognition. For example, the eye tracking device and head tracking device of the system detect a sharp turn of the user's head, and at the same time, the eye gaze point is scanned for a long distance to reach the vicinity of a target object, and the gaze point is aligned with the target object after the correction saccade is detected.
For this series of actions, the system judges that the user has a high degree of interest in the target object”); determine a gaze of an eye of a user or facial features of a face of the user associated with the user viewing, by the apparatus, items of content in an environment ([0112], “Tracked eye movements and geometric shapes can be used to discern the physical and/or emotional state of an individual in a continuous manner. When combined with information about real or virtual objects that the user is viewing”); determine, based on the determined gaze or the facial features, a state of the user or interest of the user ([0112, 0372], “Tracked eye movements and geometric shapes can be used to discern the physical and/or emotional state of an individual in a continuous manner. When combined with information about real or virtual objects that the user is viewing, it is possible to discern, for example, indications of object categories that cause surprise, attraction, or interest”); and determine, by implementing the trained interaction intention model based on determining that the gaze of the eye of the user or the facial features of the face of the user in relation to the gazes or the facial expressions of the training data equals or exceeds a predetermined threshold ([0172, 0176] disclose this process; [0176] teaches a process of detecting eye movement and behavior analysis to learn and determine a user's degree of interest, which includes step (3), and [0172] teaches step (3) includes a step to “determine the user's degree of interest in the current gaze position and if the degree of interest is equal to/exceeds the threshold.”), and based on the determined state of the user or the interest of the user, content to generate a modification of the items of content or to generate new content items associated with the items of content ([0112, 0372], “When combined with information about real or virtual objects that the user is viewing, it is possible to discern, for example, indications of object categories that cause surprise, attraction, or interest, and such information can be used, for example, to customize the subsequent display of information.” Examiner notes “customizing subsequent display of information” based on the user's “attraction or interest” corresponds to modifying the content based on a determined interest level of the user. [0172], “a trained interaction intention model Match, and then determine the user's degree of interest in the current gaze position and if the degree of interest is equal to/exceeds the threshold, step (3) is executed”).

Chen does not explicitly state that its trained interaction intention model is a machine learning model ([0040-0045]) or that the training data comprises a determined score. In an analogous art, Everest teaches an apparatus and method comprising a machine learning model (Figs. 1-2, machine vision system 100 and machine-learning module 200 comprising a machine learning model 224) using training data that comprises a determined score ([0041] teaches the use of a score. [0134]: “For the purposes of this disclosure, ‘accuracy score’ is a numerical value concerning the accuracy of a machine-learning model. For example, a plurality of user feedback scores may be averaged to determine an accuracy score”).
In an effort to expedite prosecution, examiner notes Everest also teaches the limitations: implement a machine learning model comprising training data pre-trained, or trained in real-time based on captured content or prestored content associated with gazes of users, pupil dilations of the users, facial expressions of the users, muscle movements of the users, heart rates, or gaze dwell times of the users determined previously or in real time ([0040-0041], “a machine learning model may be trained on a dataset including many video files of prior users observing content, with each file associated with one or more data points indicating how the user reacted to the content.” [0042], “such a model may allow a machine vision system to, for example, recognize a certain facial expression.” [0039], machine learning model performs face detection and eye-tracking); determine, based on the determined gaze or the facial features, a state of the user or interest of the user ([0040], “a machine vision system may be used to determine a feature of the user's reaction to the educational content, such as the degree to which a user's reaction to educational content is positive, negative, and/or reflects specific emotions or states of mind such as confusion, frustration, understanding.” [0041], “In some embodiments, a training data set may include image and/or video data, associated with whether features such as confusion, frustration, understanding, interest, and boredom are present.”); determine, by implementing the machine learning model based on determining that the gaze of the eye of the user or the facial features of the face of the user in relation to the gazes or the facial expressions of the training data comprises a determined score that equals or exceeds a predetermined threshold ([0041], “a machine learning model may be trained on a dataset including many video files of prior users observing content, with each file associated with one or more data points indicating how the user reacted to the content. In some embodiments, a training data set may include image and/or video data, associated with a degree of positivity (or negativity) of a depicted user reaction.” [0045] teaches the machine learning model can detect human emotions and engagement with content. The art teaches the machine learning model can detect, through user facial expressions, their degree of interest, including positivity and negativity (or confusion and boredom), which teaches a score that equals or exceeds a threshold. Also see [0129], machine learning model can be trained or retrained through comparisons to thresholds; [0134] teaches an accuracy score of the machine learning model), and based on the determined state of the user or the at least one interest of the user, content to generate a modification of the items of content or to generate new content items associated with the items of content ([0045], “a model may be trained to detect emotions and/or reaction features from facial expressions and/or body language generally and may be adapted to detect emotions for a specific purpose…disgust may indicate that a feature of educational content may need to be removed, confusion may indicate that a more simplified form of educational content is best, engagement may indicate that the educational content is appropriate, and boredom may indicate that a more in depth form of the educational content is best”).

Chen and Everest, individually and in combination, teach each and every limitation of the claims. Everest [0040-0045] teaches that the benefit of training and implementing a machine learning model is that it can then determine a user's degree of interest and comprehension in displayed content so that the model can change what is being displayed to keep the user engaged and learning.
Therefore, it would have been obvious to one skilled in the art, before the effective filing date of the invention, to modify Chen with Everest such that a machine learning model detects and scores a user's interest in displayed material, as this allows a system to dynamically update what is being displayed to maintain the user's engagement and understanding of the displayed content.

Examiner finally calls applicant's attention to Hunsmann et al. (US 20230129243 A1), [0093 and 0103], cited in the related prior art Conclusion section below, who teach the independent claim limitation(s) regarding a machine learning model for facial recognition which compares a combination of facial characteristics with a threshold score.

Regarding claims 2, 11, and 20, Chen teaches providing the modification of the items of content or the new content items to a display or a user interface of the apparatus to enable the user to interact with, or view, the modification of the items of content or the one or more new content items ([0112, 0372], “When combined with information about real or virtual objects that the user is viewing, it is possible to discern, for example, indications of object categories that cause surprise, attraction, or interest, and such information can be used, for example, to customize the subsequent display of information”). Everest [0041, 0045] also teaches the claim limitations ([0045], “a model may be trained to detect emotions and/or reaction features from facial expressions and/or body language generally and may be adapted to detect emotions for a specific purpose…disgust may indicate that a feature of educational content may need to be removed, confusion may indicate that a more simplified form of educational content is best, engagement may indicate that the educational content is appropriate, and boredom may indicate that a more in depth form of the educational content is best”).
Regarding claims 3 and 12, Chen teaches wherein the apparatus comprises at least one of an artificial reality device, a head-mounted display, or smart glasses (Figs. 1 and 2, [0001] show and teach augmented/mixed reality MR glasses). Everest also teaches the claim limitations ([0098], VR headset).

Regarding claims 4 and 13, Chen teaches determining the gaze or the one or more facial features based on images or video items captured by cameras of the apparatus (Abstract and [0020] teach the MR glasses' use of IR and color cameras. [0040] teaches the MR glasses “obtain the user's gaze point/gaze point coordinates in one or more front camera images.” [0365-0366], “one or more frames are extracted from the real-time view of the front camera to create additional content. A part of content that has been determined to be of interest (e.g., one or more frames) can be extracted from the content item, for example, as one or more images or short videos”). Everest also teaches the claim limitations ([0037, 0039], machine vision camera. [0107], Fig. 1, device input interface may include a camera).

Regarding claims 5 and 14, Chen teaches the images or the video items are associated with the user performing activities within, or associated with, the environment ([0020, 0025, 0069-0070] teach the user is able to interact with objects within the environment through gaze command detection. [0246], “When the user selects the target through the gaze point and fully expresses the willingness to interact, switch the high-definition color camera to obtain the target object Partial images and partial images of objects are uploaded to the server for recognition”). Everest also teaches the claim limitations ([0037-0039], machine vision camera captures images of body language or facial expressions).
Regarding claims 6 and 15, Chen teaches wherein the at least one state comprises at least one of joy, sadness, alertness, fatigue, interest, or disinterest of the user while the user is performing an activity within, or associated with, the environment ([0112, 0372], “When combined with information about real or virtual objects that the user is viewing, it is possible to discern, for example, indications of object categories that cause surprise, attraction, or interest, and such information can be used, for example, to customize the subsequent display of information”). Everest also teaches the claim limitations ([0041, 0045], “a model may be trained to detect emotions and/or reaction features from facial expressions and/or body language…such as disgust, confusion, boredom, frustration, understanding, interest, the [user] response is positive or negative”).

Regarding claims 7 and 16, Chen teaches determining that the one or more facial features comprise one or more muscle movements of the face of the user ([0371], “Other information about the user's intention can be determined based on other factors, such as…facial muscle movement”). Everest also teaches the claim limitations ([0037], “reaction datum 128 may include physical movements such as body language or facial expressions which may be detected using a machine vision system”).

Regarding claims 8 and 17, Chen teaches wherein the environment comprises a virtual reality environment or an augmented reality environment ([0004] teaches the use of MR/AR/VR smart glasses in an AR/VR environment). Everest also teaches the claim limitations ([0096, 0098], virtual world and augmented and virtual reality space).
Regarding claims 9 and 18, Chen teaches the method of claim 4, further comprising: determining a heart rate of the user or a blood pressure of the user based on the images or the video items of the user performing activities, and wherein the modification of the items of content or the new content items is based on the determined at least one heart rate or the at least one blood pressure ([0008], “the local image recognition program is started, and the local image recognition program is started when the human bioelectricity is obtained and the user's point of interest is recognized. The interest recognition conditions include: The heart rate and blood pressure human body biological information monitoring module at the same time detects the user's emotional change data”). It would have been obvious to one skilled in the art, before the effective filing date of the invention, to modify the mapped portions of Chen with [0008] such that at least one heart rate of the user or at least one blood pressure of the user is determined based on the one or more images or the one or more video items of the user performing one or more activities, as this amounts to combining prior art elements according to known methods to yield predictable results, based on the above findings. See MPEP 2143, rationale (A).

Response to Arguments

Applicant's arguments filed December 12, 2025 are directed towards the amended subject matter. As detailed in the rejection above, Chen in view of Everest teaches all the limitations of the invention as currently claimed. Examiner also calls applicant's attention to Hunsmann, who teaches the amended limitations regarding a machine learning model which detects facial expressions and compares the expressions to a threshold score.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20230129243 A1, Hunsmann et al., [0093, 0103] teach that a combination of facial characteristics may be compared with a threshold score.
This relates to the independent claim limitations regarding a “score.”

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NATHAN P BRITTINGHAM, whose telephone number is (571) 270-7865. The examiner can normally be reached Monday-Thursday, 10 AM - 6 PM, EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Benjamin Lee, can be reached at (571) 272-2963. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NATHAN P BRITTINGHAM/
Examiner, Art Unit 2629

Prosecution Timeline

Oct 22, 2024
Application Filed
Jun 02, 2025
Non-Final Rejection — §103
Sep 02, 2025
Response Filed
Sep 30, 2025
Final Rejection — §103
Dec 29, 2025
Response after Non-Final Action
Jan 28, 2026
Request for Continued Examination
Jan 31, 2026
Response after Non-Final Action
Mar 13, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592176: ELECTRONIC APPARATUS AND CONTROL METHOD THEREOF FOR PROVIDING A PROGRESSIVE OR INTERLACED SCANNING METHOD
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12592714: Apparatus Comprising Analog to Digital Converter Semiconductor Device
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12525195: SCAN DRIVING CIRCUIT, DISPLAY DEVICE AND METHOD OF OPERATING THE DISPLAY DEVICE
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12517578: TRANSPARENT DISPLAY APPARATUS WHEREIN AN IMAGE IS DISPLAYED TO A SECOND USER ON THE OPPOSITE SIDE OF THE DISPLAY
Granted Jan 06, 2026 (2y 5m to grant)

Patent 12518678: GATE DRIVER AND DISPLAY DEVICE INCLUDING THE SAME
Granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 92% (+17.7%)
Median Time to Grant: 2y 9m
PTA Risk: High
Based on 461 resolved cases by this examiner. Grant probability derived from career allow rate.
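The headline figures are consistent with a simple derivation from the examiner's career data: round the career allow rate (340 granted of 461 resolved) to get the base grant probability, then add the observed interview lift. The Python sketch below is illustrative only; the exact rounding and aggregation the tool uses are assumptions, as are the function names.

```python
# Illustrative derivation of the dashboard's headline figures.
# Inputs come from the examiner stats shown above; the rounding
# scheme (round to nearest whole percent) is an assumption.

def grant_probability(granted: int, resolved: int) -> int:
    """Career allow rate as a whole-number percentage."""
    return round(granted / resolved * 100)

def with_interview(base_pct: int, lift_pct: float) -> int:
    """Base grant probability plus the observed interview lift."""
    return round(base_pct + lift_pct)

base = grant_probability(340, 461)    # 340/461 is about 73.75%, rounds to 74
boosted = with_interview(base, 17.7)  # 74 + 17.7 = 91.7, rounds to 92
print(f"Grant probability: {base}%, with interview: {boosted}%")
```

Note that 74% + 17.7% = 91.7%, matching the displayed 92% after rounding; whether the tool adds the lift to the rounded or unrounded base is not stated.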
