Prosecution Insights
Last updated: April 19, 2026
Application No. 18/179,746

LEVERAGING EYE GESTURES TO ENHANCE GAME EXPERIENCE

Status: Non-Final Office Action (§103)

Filed: Mar 07, 2023
Examiner: GUPTA, PARUL H
Art Unit: 2627
Tech Center: 2600 (Communications)
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)

Grant Probability: 61% (Moderate)
Expected OA Rounds: 1-2
Median Time to Grant: 2y 11m
Grant Probability With Interview: 94%

Examiner Intelligence

Career Allow Rate: 61% (375 granted / 617 resolved; -1.2% vs Tech Center average)
Interview Lift: +33.0% (strong); allowance-rate gain for resolved cases with an interview vs. without
Typical Timeline: 2y 11m average prosecution; 14 applications currently pending
Career History: 631 total applications across all art units
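The arithmetic behind these figures is easy to check. Below is a minimal sketch that reproduces the displayed numbers, assuming the "with interview" figure is simply the rounded career allow rate plus the interview lift in percentage points (that additive combination is an assumption, not the vendor's documented model):

```python
# Reproduce the dashboard figures from the reported counts.
granted, resolved = 375, 617       # career grants / resolved cases
interview_lift_pts = 33.0          # reported interview lift, in percentage points

allow_rate = 100 * granted / resolved                     # 60.78% -> shown as 61%
with_interview = round(allow_rate) + interview_lift_pts   # 61 + 33.0 = 94.0

print(f"career allow rate: {allow_rate:.1f}%")     # 60.8%
print(f"with interview:    {with_interview:.0f}%") # 94%
```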

Statute-Specific Performance

§101: 1.9% (-38.1% vs TC avg)
§103: 71.3% (+31.3% vs TC avg)
§102: 15.2% (-24.8% vs TC avg)
§112: 6.4% (-33.6% vs TC avg)

Comparisons are against the Tech Center average estimate; based on career data from 617 resolved cases.

Office Action

Non-Final Rejection under 35 U.S.C. § 103 (mailed Aug 08, 2025)
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 13-22 are rejected under 35 U.S.C. 103 as being unpatentable over Gordon et al., US Patent Publication 2017/0344209, in view of Pardeshi et al., US Patent Publication 2021/0086089.

Regarding independent claim 13, Gordon et al. teaches a method for providing assistance to a user viewing content (paragraphs 0018-0019), comprising: tracking eye gestures of the user viewing the content (paragraph 0021 describes the eye tracking), the attributes used to determine an area within the content that the user is focusing on (paragraph 0020 explains that the "user interface context corresponding to user focus at the time of the detected user state" is detected); analyzing the attributes associated with the eye gestures of the user to detect the user experiencing a type of eye strain that causes the user to be unable to discern the content (paragraphs 0018, 0050, 0062, and 0067 explain that the user is unable to discern the content, causing confusion as comprehension or attention falls below a threshold, and paragraphs 0005 and 0018 attribute the lack of comprehension or attention to eye strain); and dynamically adjusting rendering attributes of a portion of the content presented in the area that the user is focusing on so as to make the content discernible to the user, a level of adjusting of the rendering attributes defined based on vision characteristics of the user viewing the content (paragraphs 0050-0053 explain how the different formats are used to correspond to the user state and how the rendering is done to help the user focus and avoid lack of comprehension or attention).

Gordon et al. does not specify that the eye gestures are analyzed to identify attributes associated with the eye gestures, or that operations of the method are performed by an eye gesture processing module executing on a processor of a computing device. Pardeshi et al. teaches that the eye gestures are analyzed to identify attributes associated with the eye gestures (paragraph 0097 explains how the video data is analyzed to determine information about eye strain, changes in eye movements, and changes in blinking patterns) and that operations of the method are performed by an eye gesture processing module executing on a processor of a computing device (paragraph 0108 explains the processing modules that are executed on a processor of a computing device). It would have been obvious to one of ordinary skill in the art before the effective filing date to include the software and analysis taught by Pardeshi et al. to control the method of Gordon et al. The rationale to combine would be to help players improve their performance playing the game (abstract of Pardeshi et al.).
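To make the claim 13 method as mapped above concrete, here is a minimal, purely hypothetical sketch of the claimed pipeline: track eye gestures, analyze attributes to detect eye strain, then adjust rendering in the focus area. Every name, threshold, and the region interface below is invented for illustration; this is not code from the application, Gordon, or Pardeshi.

```python
from dataclasses import dataclass

@dataclass
class EyeGestureSample:
    gaze_x: float        # normalized gaze position within the content
    gaze_y: float
    blink_rate: float    # blinks per minute
    fixation_ms: float   # current fixation duration in milliseconds

def detect_eye_strain(s: EyeGestureSample) -> bool:
    # Stand-in heuristic for "analyzing the attributes ... to detect the
    # user experiencing a type of eye strain"; thresholds are invented.
    return s.blink_rate > 25 or s.fixation_ms > 1500

def adjust_rendering(content, s: EyeGestureSample, vision_profile: dict) -> None:
    # "dynamically adjusting rendering attributes of a portion of the
    # content presented in the area that the user is focusing on", scaled
    # by the user's vision characteristics. `region_at`, `set_scale`, and
    # `set_contrast` are hypothetical interfaces.
    region = content.region_at(s.gaze_x, s.gaze_y)
    region.set_scale(vision_profile.get("magnification", 1.25))
    region.set_contrast(vision_profile.get("contrast_boost", 1.1))

def eye_gesture_processing_module(content, samples, vision_profile) -> None:
    # The "eye gesture processing module executing on a processor"
    # limitation reduces here to a loop over tracked samples.
    for s in samples:
        if detect_eye_strain(s):
            adjust_rendering(content, s, vision_profile)
```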
Regarding claim 14, Pardeshi et al. further teaches the method of claim 13, wherein analyzing the attributes further includes engaging a machine learning algorithm (as described in paragraphs 0055, 0076, and 0080-0082) to: analyze the attributes associated with the eye gestures (paragraph 0097 explains how the attributes of the eye movements or blinking patterns are analyzed); classify the attributes, the classifying used to tag the attributes with metadata (paragraphs 0094 and 0097 explain how metadata is used to classify the attributes to demonstrate eye strain and eye movements); generate an artificial intelligence (AI) model, the AI model trained using the metadata associated with the eye gestures (paragraphs 0103-0104 explain how the AI models and artificial neural networks are trained); and identify an output from the AI model that identifies a type of eye strain corresponding with the attributes of the eye gestures (paragraph 0096 explains how the different types of impairments can be determined by the trained neural network).

Regarding claim 15, Gordon et al. teaches the method of claim 13, wherein dynamically adjusting rendering attributes includes magnifying or reducing a size of the content rendered in the portion of the content, a level of magnification or reduction defined based on visual characteristics of the user (paragraph 0053 explains how user status is used to help the user focus by using greater text size, which is a magnification).

Regarding claim 16, Pardeshi et al. further teaches the method of claim 13, wherein the dynamically adjusting includes providing a text content overlay when the content includes text content (paragraph 0054 explains the keywords that can be generated, which are text).
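The claim 14 machine-learning loop mapped above (analyze attributes, tag them with metadata, train an AI model, read out a strain type) can be sketched in a few lines. This is a toy illustration only: the features, labels, sample values, and classifier choice are assumptions, not what Pardeshi discloses.

```python
from sklearn.ensemble import RandomForestClassifier

# Eye-gesture attributes: [blink rate, fixation ms, saccade velocity],
# paired with metadata labels from the classifying/tagging step.
# Values are placeholders for illustration.
X_train = [
    [12.0,  400.0, 300.0],
    [28.0, 1800.0,  90.0],
    [31.0, 2100.0,  80.0],
]
y_train = ["none", "focal_strain", "focal_strain"]

# "generate an artificial intelligence (AI) model ... trained using the
# metadata associated with the eye gestures"
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# "identify an output from the AI model that identifies a type of eye strain"
print(model.predict([[29.0, 1900.0, 85.0]]))   # e.g. ['focal_strain']
```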
Regarding claim 17, Pardeshi et al. further teaches the method of claim 13, wherein tracking the eye gestures further includes capturing facial features of the user as the user is interacting with the content, the facial features captured using a plurality of sensors and one or more image capturing devices available within a physical environment where the user is interacting with the content (paragraph 0097 explains how the video data is analyzed to determine eye movements and changes in position of the head and posture, which are all facial features of the user detected during use); and classifying the facial features to define metadata, the metadata used to generate and train an artificial intelligence (AI) model, output from the AI model used to define attributes associated with the eye gestures to dynamically adjust content being presented to the user (paragraph 0097 explains how the detected information is used to determine metadata used to determine fatigue, which is used to adjust the content presented to the user through notifications or longer load times based on the output of the neural network, as given in paragraph 0098).

Regarding claim 18, Gordon et al. teaches the method of claim 17, wherein tracking the eye gestures further includes responsively directing attention of the user to a second area that is different from the area that the user is focusing on, the content in the second area being presented using foveated visual rendering (paragraphs 0022-0023 explain how, based on the player state, the view is adjusted in different formats, where the formats are associated with different regions or windows and include display settings that include resolutions). Gordon et al. does not specify that tracking the eye gestures further includes forwarding the attributes identified for the eye gestures through an application programming interface (API) to an interactive application providing the content, the interactive application processing the attributes of the eye gestures to identify a level of eye strain experienced by the user. Pardeshi et al. further teaches that tracking the eye gestures includes forwarding the attributes identified for the eye gestures through an application programming interface (API) to an interactive application providing the content (paragraph 0233 explains the use of the API that generates the content), the interactive application processing the attributes of the eye gestures to identify a level of eye strain experienced by the user (paragraphs 0096-0097 explain how the attributes are used to generate reaction times and a level of fatigue or eye strain through player state). It would have been obvious to one of ordinary skill in the art before the effective filing date to include the software taught by Pardeshi et al. to control the method of Gordon et al. The rationale to combine would be to help players improve their performance playing the game (abstract of Pardeshi et al.).

Regarding claim 19, Gordon et al. teaches the method of claim 18, wherein the interactive application is a video game and the content is game content (paragraph 0025 explains the use of the device with video games and video game content).
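The claim 18 limitation at issue above (forwarding eye-gesture attributes through an API to the interactive application, which converts them to an eye-strain level) amounts to a small producer/consumer interface. A sketch under stated assumptions; the class name, payload shape, and scoring rule are all hypothetical:

```python
import json

class InteractiveApplicationAPI:
    """Hypothetical application-side endpoint receiving gesture attributes."""

    def submit_eye_gesture_attributes(self, payload: str) -> int:
        attrs = json.loads(payload)
        # Application-side processing "to identify a level of eye strain
        # experienced by the user": a stand-in score from 0 to 3.
        level = 0
        if attrs["blink_rate"] > 25:
            level += 1
        if attrs["fixation_ms"] > 1500:
            level += 2
        return min(level, 3)

# The tracking layer forwards attributes "through an application
# programming interface (API) to an interactive application".
api = InteractiveApplicationAPI()
level = api.submit_eye_gesture_attributes(
    json.dumps({"blink_rate": 28.0, "fixation_ms": 1700.0})
)
print(f"eye-strain level: {level}")   # -> eye-strain level: 3
```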
Regarding claim 20, Pardeshi et al. further teaches the method of claim 13, wherein the computing device is a game console and the content is game content (paragraph 0321 explains that the system is a game console used in gaming with game content), and wherein the eye gesture processing module is part of an operating system of the game console (paragraph 0097 explains how the eye gesture processing that determines eye strain is part of the video data of the game).

Regarding claim 21, Gordon et al. teaches the method of claim 13, wherein the computing device is a cloud server of a cloud system and the eye gesture processing module is executed by the processor of the computing device in the cloud system (paragraphs 0080-0082 explain the use of cloud computing in the system).

Regarding claim 22, Gordon et al. teaches the method of claim 13, wherein the eye gesture processing module is incorporated into hardware of the computing device (paragraph 0036 explains how the eye tracking sensors are incorporated into the device hardware).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. The closest prior art is made of record in the attached notice of references cited.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARUL H GUPTA, whose telephone number is (571) 272-5260. The examiner can normally be reached Monday through Friday, from 10 AM to 7 PM.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Ke Xiao, can be reached at 571-272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PARUL H GUPTA/
Primary Examiner, Art Unit 2627

Prosecution Timeline

Mar 07, 2023: Application Filed
Aug 08, 2025: Non-Final Rejection (§103)
Oct 16, 2025: Applicant Interview (Telephonic); Examiner Interview Summary
Nov 10, 2025: Response after Non-Final Action Filed
Dec 05, 2025: Interview Requested
Dec 11, 2025: Applicant Interview (Telephonic); Examiner Interview Summary
Jan 21, 2026: Response after Non-Final Action Filed

Precedent Cases

Applications granted by this examiner in similar technology

Patent 12593588: DISPLAY SUBSTRATE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12585342: WRIST-WORN DEVICE CONTROL METHOD, RELATED SYSTEM, AND STORAGE MEDIUM (granted Mar 24, 2026; 2y 5m to grant)
Patent 12578913: DISPLAY METHOD, ELECTRONIC DEVICE, AND SYSTEM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579953: DISPLAY APPARATUS, CONTROL MODULE THEREOF AND DRIVE METHOD THEREFOR (granted Mar 17, 2026; 2y 5m to grant)
Patent 12579941: PIXEL DRIVING CIRCUIT AND DISPLAY PANEL (granted Mar 17, 2026; 2y 5m to grant)
Study what changed to get past this examiner; the list reflects this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 61%
Grant Probability With Interview: 94% (+33.0%)
Median Time to Grant: 2y 11m
PTA Risk: Low

Based on 617 resolved cases by this examiner; grant probability is derived from the career allow rate.
