Prosecution Insights
Last updated: April 19, 2026
Application No. 18/465,811

CLASSIFYING A DISCOMFORT LEVEL OF A USER WHEN INTERACTING WITH VIRTUAL REALITY (VR) CONTENT

Non-Final OA — §102, §103

Filed: Sep 12, 2023
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: Sony Interactive Entertainment Inc.
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 10m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68% (711 granted / 1052 resolved), +5.6% vs TC avg (above average)
Interview Lift: +29.9% (strong), comparing allow rates with vs. without an interview among resolved cases
Typical Timeline: 2y 10m average prosecution; 110 applications currently pending
Career History: 1,162 total applications across all art units
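The headline figures are simple ratios of the counts shown above. Below is a minimal sketch of the arithmetic; it assumes (as the displayed 68% + 29.9% = 98% suggests) that the dashboard rounds the base rate before adding the reported lift, and it takes the interview lift as reported because the per-interview grant counts are not shown on this page.

```python
# Reproduce the headline numbers from the counts shown above. The raw
# with/without-interview grant counts are not shown here, so the lift is
# taken as reported (+29.9 percentage points) rather than recomputed.

granted, resolved = 711, 1052              # career totals shown above
allow_rate = 100 * granted / resolved      # 67.59 -> displayed as 68%
interview_lift = 29.9                      # reported lift, percentage points

# Assumed rounding order: round the base rate first, then add the lift.
with_interview = round(allow_rate) + interview_lift   # 68 + 29.9 = 97.9 -> ~98%

print(f"Career allow rate: {allow_rate:.1f}%")        # 67.6%
print(f"With interview:    {with_interview:.1f}%")    # 97.9%
```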

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)

TC average is an estimate (shown as the black line in the original chart) • Based on career data from 1052 resolved cases
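Each delta is just the examiner's per-statute rate minus the Tech Center average. Since the page only showed the averages as a chart line, the sketch below back-derives them from the displayed deltas; this is a reconstruction, not data from the page itself.

```python
# Back-derive the Tech Center averages from the displayed examiner rates and
# deltas (delta = examiner rate - TC average). Notably, all four back out to
# 40.0%, i.e. the deltas appear to be measured against a common ~40% baseline.

examiner_rate = {"§101": 2.2, "§103": 43.9, "§102": 27.0, "§112": 20.7}   # percent
delta_vs_tc   = {"§101": -37.8, "§103": 3.9, "§102": -13.0, "§112": -19.3}

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]
    print(f"{statute}: examiner {rate:4.1f}% | TC avg {tc_avg:.1f}% | {delta_vs_tc[statute]:+.1f} pts")
```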

Office Action

§102, §103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/16/2025 has been entered.

Response to Arguments

Applicant's arguments filed on 12/16/2025 have been fully considered but are moot because the arguments do not apply to any of the references being used in the current rejection.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1, 4-8, 11-15, and 18-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Ramaprakash et al. (US Pub 2018/0088669 A1).

As to claim 1, Ramaprakash discloses a method, comprising:

executing an application to generate output data, wherein the output data is presented at a device to a user (Fig. 2, ¶0026, “The VR content 204 is presented to the user via the VR viewer 106 and/or the speaker(s) 108 communicatively coupled to the VR manager 114.”);

receiving physiological feedback associated with the user (Fig. 2, ¶0019, “[T]he VR HMD 102 can include other number(s) and/or type(s) of sensors 110 to collect neurological and/or physiological data from the user 104, such as electrooculography (EOG) sensors, galvanic skin response sensors, and/or electrocardiography (EKG) sensors.”);

providing the output data and the physiological feedback as input to a model, wherein the model is trained to identify a classification associated with a likelihood of a physiological response when interacting with the application (Fig. 2, Fig. 3, rules engine 222; ¶0032, “The rules engine 222 analyzes the video vector(s) 218 and/or the audio vector(s) 220 to determine whether the visual and/or audio parameter(s) of the VR content stream 216 correspond to known and/or learned seizure trigger(s).
In analyzing the video vector(s) 218 and the audio vector(s) 220, the rules engine 222 of the illustrated example implements a machine learning algorithm that also considers other variables such as neurological data received from the user while viewing the VR content 204 (e.g., while viewing a video frame and/or prior video frames of the VR content 204 and/or associated therewith), user profile data such as age and gender of the user, previously analyzed VR content and/or neurological data (e.g., calibration data), and known and/or learned seizure triggers, as will be disclosed below.” ¶0060, “the rules engine 222 classifies VR content stream 216 (e.g., the video and/or audio portions). The classification can be used to identify seizure triggers in other VR content having similar visual and/or audio parameters and/or invoking similar neurological/physiological responses in users. The classification can also be used to preemptively warn VR HMD users and/or to preemptively modify the VR content across users. The prediction of the VR content stream 216 as likely to induce a seizure and/or other negative neurological/physiological response in a user and the corresponding visual and/or audio parameters of the VR content stream 216 are also used to update or refine the learning algorithm used by the predictor 316, as will be disclosed below.”);

determining an output of the model based on the input, the output indicating the classification (¶0060, quoted above);

generating, based on the classification, a notification indicating the likelihood of the physiological response (¶0060-0061, quoted above).
As to claim 4, claim 1 is incorporated and Ramaprakash discloses wherein the output data comprises a pattern of actions taken by the user during execution of the application (Ramaprakash, ¶0027, “the VR content 204 includes special effects, such as user hand gestures, that are generated as the user interacts with the VR content 204 in real-time.” ¶0043, “provides for continued monitoring and corrective action to address the potential for seizures” ¶0065, “the predictor 316 can determine that the user is experiencing a seizure (e.g., a PSE seizure) based on similarities between the patterns in the neurological/physiological data 226 identified by the neurological/physiological data analyzer 304 and the calibration neurological/physiological data 302 collected during prior seizures (e.g., for the user 104 of the VR HMD 102 of FIG. 1 or other users).”).

As to claim 5, claim 1 is incorporated and Ramaprakash discloses wherein the output data comprises a pattern of images in VR content of the application (Ramaprakash, Fig. 2, ¶0066, “the predictor 316 determines that the VR content stream 216 is not likely to induce a seizure (e.g., a PSE seizure) in the user. For example, if the video vector analyzer 312 does not detect any changes in the video vector(s) 218 for a current sequence of video frames of the VR content stream 216 under analysis relative to vectors for previously presented frames of the VR content 204 and the neurological/physiological data analyzer 304 does not detect any changes in the neurological/physiological data 226 collected from the user during exposure to the VR content 204, the predictor 316 may determine that the VR content stream 216 is not likely to induce a seizure (e.g., a PSE seizure) in the user.”).

As to claim 6, claim 1 is incorporated and Ramaprakash discloses wherein generating the notification comprises determining that the application is associated with a likelihood of a physiological response when the classification meets a threshold (Ramaprakash, ¶0024, “the seizure monitor 116 determines that the neurological data collected from the user 104 by the sensors 110 as the user 104 is exposed to the VR content generated by the VR manager 114 is indicative of one or more characteristics of an impending seizure (e.g., a PSE seizure) or an in-progress seizure (e.g., a PSE seizure).” “the seizure monitor 116 predicts that continued exposure to the VR content may induce a PSE seizure in the user 104 or continue or worsen the PSE seizure symptoms of the user 104.” “the VR manager 114 stops transmission of the VR content to the VR HMD 102 in response to the seizure monitor 116 detecting more than a threshold likelihood that the VR content is likely to induce a seizure or other negative neurological and/or physiological event in the user.”).

As to claim 7, claim 1 is incorporated and Ramaprakash discloses wherein the classification is different than a classification of the application (Ramaprakash, ¶0024, “indicative of one or more characteristics of an impending seizure (e.g., a PSE seizure) or an in-progress seizure (e.g., a PSE seizure).” ¶0060, “the rules engine 222 classifies VR content stream 216 (e.g., the video and/or audio portions). The classification can be used to identify seizure triggers in other VR content having similar visual and/or audio parameters and/or invoking similar neurological/physiological responses in users. The classification can also be used to preemptively warn VR HMD users and/or to preemptively modify the VR content across users.” ¶0067, “the alert generator 320 automatically references previous classifications of the VR content 204 (or other VR content) stored in the database 300 to determine if the alert(s) 234 should be generated.”).

As to claim 8, Ramaprakash discloses a computer system comprising: a memory configured to store computer-executable instructions; and a processor configured to access the memory and execute the computer-executable instructions to at least: execute an application to generate output data, wherein the output data is presented at a device to a user; receive physiological feedback associated with the user; provide the output data and the physiological feedback as input to a model, wherein the model is trained to identify a classification associated with a likelihood of a physiological response when interacting with the application; determine an output of the model based on the input, the output indicating the classification; and generate, based on the classification, a notification indicating the likelihood of the physiological response (See claim 1 for detailed analysis.).

As to claim 11, claim 8 is incorporated and Ramaprakash discloses the output data comprises a pattern of actions taken by the user during execution of the application (See claim 4 for detailed analysis.).

As to claim 12, claim 8 is incorporated and Ramaprakash discloses the output data comprises a pattern of images in VR content of the application (See claim 5 for detailed analysis.).

As to claim 13, claim 8 is incorporated and Ramaprakash discloses generating the notification comprises determining that the application is associated with a likelihood of a physiological response when the classification meets a threshold (See claim 6 for detailed analysis.).

As to claim 14, claim 8 is incorporated and Ramaprakash discloses the classification is different than a classification of the application (See claim 7 for detailed analysis.).

As to claim 15, Ramaprakash discloses one or more non-transitory computer-readable storage media storing computer-readable instructions that, when executed by one or more processors, cause a system to perform operations comprising: executing an application to generate output data, wherein the output data is presented at a device to a user; receiving physiological feedback associated with the user; providing the output data and the physiological feedback as input to a model, wherein the model is trained to identify a classification associated with a likelihood of a physiological response when interacting with the application; determining an output of the model based on the input, the output indicating the classification; and generating, based on the classification, a notification indicating the likelihood of the physiological response (See claim 1 for detailed analysis.).

As to claim 18, claim 15 is incorporated and Ramaprakash discloses the output data comprises a pattern of actions taken by the user during execution of the application or a pattern of images in VR content of the application (See claims 4-5 for detailed analysis.).

As to claim 19, claim 8 is incorporated and Ramaprakash discloses generating the notification comprises determining that the application is associated with a likelihood of a physiological response when the classification meets a threshold (See claim 6 for detailed analysis.).
As to claim 20, claim 15 is incorporated and Ramaprakash discloses the classification is different than a classification of the application (See claim 7 for detailed analysis.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 2-3, 9-10, and 16-17 are rejected under 35 U.S.C. 103 as being unpatentable over Ramaprakash et al. (US Pub 2018/0088669 A1) in view of Gentilin et al. (US Pub 2018/0024625 A1).

As to claim 2, claim 1 is incorporated and Ramaprakash discloses providing a recommendation of one or more video (Ramaprakash, ¶0060, “The classification can also be used to preemptively warn VR HMD users and/or to preemptively modify the VR content across users.” ¶0061, “The content modification manager 318 analyzes the video vector(s) 218 and/or the audio vector(s) 220 of the VR content stream 216 to determine one or more modifications to the visual and/or audio parameters of the VR content stream 216. The content modification manager 318 determines a factor (e.g., an amount, a percentage) by which to adjust the visual and/or the audio parameters to reduce the likelihood of an occurrence of a seizure (e.g., a PSE seizure) and/or other negative neurological/physiological event in the user.” ¶0070, “the predictor 316 and/or the feedback analyzer 322 communicate with the content modification manager 318 and/or the alert generator 320 to determine corrective actions such as modifying the upcoming VR content stream(s), generating one or more alerts 234, and/or stopping transmission of the VR content 204”. ¶0072, “The refinement of the learning algorithm improves the ability of the rules engine 222 to predict whether or not upcoming VR content 204 (or other VR content) is likely to induce a seizure (e.g., a PSE seizure) and/or other neurological/physiological event in the user (or other users).”).

Ramaprakash does not explicitly disclose video games. However, video games are an obvious choice of VR content. Gentilin teaches a video game as VR content (Gentilin, ¶0003, “Through virtual reality applications, a user is capable of experiencing a fully modeled three dimensional world of a game or movie as if the user was actually in the game or movie” ¶0026, “a report may indicate the aggregated comfort level values that a large number of users have inputted at a particular level in a virtual reality game”).
Ramaprakash and Gentilin are considered to be analogous art because both pertain to virtual reality content. It would have been obvious before the effective filing date of the claimed invention to have modified Ramaprakash with the feature of a “video game as VR content” as taught by Gentilin. The claim would have been obvious because the substitution of one known element for another would have yielded predictable results to one of ordinary skill in the art at the time of the invention.

As to claim 3, claim 1 is incorporated and Ramaprakash discloses generating a second notification that execution of a second application is associated with a likelihood of a physiological response based on the classification and a classification of the second application (Ramaprakash, ¶0067, “if the predictor 316 identifies the VR content stream 216 and/or portion(s) of the VR content stream 216 as including seizure trigger content, the predictor 316 sends a message to the alert generator 320 to generate one or more visual and/or audio alerts 234 warning the user (e.g., the user 104 of the VR HMD 102 of FIG. 1) that the VR content 204 may induce a PSE seizure. In some examples, the alert generator 320 automatically references previous classifications of the VR content 204 (or other VR content) stored in the database 300 to determine if the alert(s) 234 should be generated.” ¶0068, “the alert generator 320 generates the alert(s) 234 for transmission to one or more third parties designated by the user. Contact information for the one or more third parties can be received from one or more user inputs, via, for example the processing unit 112 of FIG. 2. The contact information for the one or more third parties can be stored in the database 300 of the rules engine 222 for reference by the alert generator 320.”).

Ramaprakash's contact information can be interpreted as a classification of the second application since the term is so broad. Gentilin also teaches a classification of the second application (Gentilin, ¶0062, “each record in database 520 has at least one key field that is associated and/or populated with identifiers of applications or VR environments, for example, particular application 120 and generated VR environment 130. Columns in these records specify comfort level values that are obtained via the first-level comfort prompt 504 and second-level comfort prompt 506, and metadata values 510, which may be received via analytics messages 512.”). Ramaprakash and Gentilin are considered to be analogous art because both pertain to virtual reality content. It would have been obvious before the effective filing date of the claimed invention to have modified Ramaprakash with the feature of “a classification of the second application” as taught by Gentilin. The claim would have been obvious because the substitution of one known element for another would have yielded predictable results to one of ordinary skill in the art at the time of the invention.

As to claim 9, claim 8 is incorporated and the combination of Ramaprakash and Gentilin discloses provide a recommendation of one or more video game titles suitable for the user based on the classification and one or more classifications of the one or more video game titles (See claim 2 for detailed analysis.).
As to claim 10, claim 8 is incorporated and the combination of Ramaprakash and Gentilin discloses generate a second notification that execution of a second application is associated with a likelihood of a physiological response based on the classification of the user and a classification of the second application (See claim 3 for detailed analysis.).

As to claim 16, claim 15 is incorporated and the combination of Ramaprakash and Gentilin discloses providing a recommendation of one or more video game titles suitable for the user based on the classification and one or more classifications of the one or more video game titles (See claim 2 for detailed analysis.).

As to claim 17, claim 15 is incorporated and the combination of Ramaprakash and Gentilin discloses generating a second notification that execution of a second application is associated with a likelihood of a physiological response based on the classification of the user and a classification of the second application (See claim 3 for detailed analysis.).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN, whose telephone number is (571) 270-7951. The examiner can normally be reached M-F 8-5 PST, mid-day flex.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613
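Claim 1 (and its system and media counterparts in claims 8 and 15) recites a five-step pipeline: execute the application, collect physiological feedback, feed both to a trained model, read out a classification, and notify when the likelihood warrants it. The sketch below is a hypothetical, minimal illustration of that flow for readers mapping the rejection to the claim language; the names (Sample, classify, notify_if_needed) and the fixed-weight logistic score are invented here and come neither from the application nor from Ramaprakash, both of which describe trained machine-learning models.

```python
import math
from dataclasses import dataclass

# Hypothetical illustration of the claim 1 pipeline: output data plus
# physiological feedback go into a model; the model's classification drives
# a notification. A fixed-weight logistic score stands in for the trained
# model recited by the claim; nothing here reproduces the actual application.

@dataclass
class Sample:
    luminance_delta: float   # stand-in feature for "a pattern of images" (claim 5)
    heart_rate_bpm: float    # stand-in for physiological feedback (e.g., EKG)

def classify(s: Sample) -> float:
    """Return a score in [0, 1]: likelihood of a physiological response."""
    z = 0.08 * s.heart_rate_bpm + 2.5 * s.luminance_delta - 10.0
    return 1.0 / (1.0 + math.exp(-z))

def notify_if_needed(s: Sample, threshold: float = 0.5) -> str | None:
    # "generating, based on the classification, a notification indicating
    # the likelihood of the physiological response" (claim 1, final step)
    score = classify(s)
    return f"Warning: discomfort likelihood {score:.0%}" if score >= threshold else None

print(notify_if_needed(Sample(luminance_delta=1.2, heart_rate_bpm=95.0)))
# -> Warning: discomfort likelihood 65%
```

The threshold comparison mirrors the claim 6 limitation (notification when "the classification meets a threshold"), which the Office Action maps to Ramaprakash's "more than a threshold likelihood" teaching in ¶0024.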

Prosecution Timeline

Sep 12, 2023: Application Filed
May 05, 2025: Non-Final Rejection — §102, §103
Aug 06, 2025: Response Filed
Sep 13, 2025: Final Rejection — §102, §103
Dec 16, 2025: Request for Continued Examination
Jan 15, 2026: Response after Non-Final Action
Mar 11, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497
THIN FILM TRANSISTOR AND ARRAY SUBSTRATE
2y 5m to grant • Granted Apr 14, 2026
Patent 12597176
IMAGE GENERATOR AND METHOD OF IMAGE GENERATION
2y 5m to grant • Granted Apr 07, 2026
Patent 12589481
TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS
2y 5m to grant • Granted Mar 31, 2026
Patent 12588347
DISPLAY DEVICE
2y 5m to grant • Granted Mar 24, 2026
Patent 12586265
LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 98% (+29.9%)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 1052 resolved cases by this examiner. Grant probability derived from career allow rate.
