Prosecution Insights
Last updated: April 18, 2026
Application No. 18/287,395

SYSTEM AND METHOD FOR PERFORMANCE IN A VIRTUAL REALITY ENVIRONMENT

Non-Final OA (§101, §102)
Filed
Oct 18, 2023
Examiner
GALKA, LAWRENCE STEFAN
Art Unit
3715
Tech Center
3700 — Mechanical Engineering & Manufacturing
Assignee
Quill & Quaver Associates Pty Ltd.
OA Round
2 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 2-3
Time to Grant: 2y 11m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 76% (649 granted / 851 resolved; +6.3% vs TC avg, above average)
Interview Lift: +18.6% allow rate for resolved cases with interview (a strong lift)
Typical Timeline: 2y 11m average prosecution; 28 applications currently pending
Career History: 879 total applications across all art units
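
The arithmetic behind these headline figures is simple enough to reproduce. A minimal sketch follows, assuming one record per resolved case; the ResolvedCase shape and field names are illustrative assumptions, not a real USPTO or vendor schema.

```python
# Minimal sketch of the examiner-panel arithmetic above.
# The ResolvedCase record shape is an assumption for illustration only.
from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool        # application issued as a patent
    had_interview: bool  # an examiner interview was held

def allow_rate(cases: list[ResolvedCase]) -> float:
    """Career allow rate = granted / resolved."""
    return sum(c.granted for c in cases) / len(cases)

def interview_lift(cases: list[ResolvedCase]) -> float:
    """Allow-rate gap between interviewed and non-interviewed cases."""
    with_iv = [c for c in cases if c.had_interview]
    without_iv = [c for c in cases if not c.had_interview]
    return allow_rate(with_iv) - allow_rate(without_iv)

# Consistent with the panel: 649 grants / 851 resolved gives a ~76.3% allow
# rate, and a +18.6 point interview lift puts interviewed cases near 95%.
```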

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 35.3% (-4.7% vs TC avg)
§102: 25.6% (-14.4% vs TC avg)
§112: 18.3% (-21.7% vs TC avg)
Deltas are measured against a Tech Center average estimate • Based on career data from 851 resolved cases
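
The four deltas shown are mutually consistent: each examiner rate plus the magnitude of its delta equals 40.0% (11.1 + 28.9 = 35.3 + 4.7 = 25.6 + 14.4 = 18.3 + 21.7), implying a single Tech Center baseline of about 40%. A short sketch of that comparison is below; what the per-statute rate actually measures (for example, the share of cases with that rejection type that still issue) is the tool's definition and is assumed here.

```python
# Sketch of the statute-level comparison above. Examiner rates are the
# displayed values; the 40% Tech Center baseline is implied by the deltas.
EXAMINER_RATE = {"§101": 0.111, "§103": 0.353, "§102": 0.256, "§112": 0.183}
TC_AVG = 0.400  # single baseline consistent with all four displayed deltas

for statute, rate in EXAMINER_RATE.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
# §101: 11.1% (-28.9% vs TC avg), §103: 35.3% (-4.7% vs TC avg), ...
```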

Office Action

§101, §102
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

Applicant’s submission of a response on January 19, 2026 has been received and considered. In the response, Applicant amended claims 1 and 9, canceled claim 14, and added claims 17-20. Therefore, claims 1-13 and 15-20 are pending.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-13 and 15-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more. According to the specification, the invention relates to a virtual reality system that allows a player to interact with a virtual environment. Exemplary claims 1 and 9 include the following underlined claim elements:

1. A virtual reality system for a virtual performance, comprising: an immersive virtual reality environment defined by a performance framework corresponding to a scripted performance and comprising pre-defined visual data and pre-defined audio data, and at least one user input device and at least one user output device for a user in electronic communication with the immersive virtual reality environment, wherein the user interacts with the immersive virtual reality environment to insert into the system, user visual data or user audio data created within the performance framework by the user during immersion in the virtual reality environment.

9. A method for interaction between a user and an immersive virtual reality environment, the method comprising: providing an immersive virtual reality environment defined by a performance framework and comprising pre-recorded visual data and pre-recorded audio data relating to performance component, the performance framework comprises one or more of a dramatic play, musical play, dance routine, choral piece, concert or ceremony, and providing at least one user input device and at least one user output device in electronic communication with the immersive virtual reality environment, the user communicating with the immersive virtual reality environment to insert visual performance data or pre-recorded audio performance data created by the user, during immersion in the virtual reality environment.

The underlined claim elements are directed to user interaction in a virtual environment and applying virtual environment logic to user interactions, which is the court-enumerated abstract idea of certain methods of organizing human activity (following rules or instructions). The various dependent claims only further detail the abstract ideas or constitute insignificant extra-solution activity and consequently are also considered abstract ideas.

This judicial exception is not integrated into a practical application because the claims do not recite additional elements that would integrate the abstract idea into a practical application. The recited “input device”, “output device”, “non-transitory computer readable medium” and “non-transitory medium” amount to implementing the abstract idea on a general purpose computer, and/or do no more than generally link the use of a judicial exception to a particular technological environment or field of use.
There is no improvement made to computer technology, since the claims are directed to user interaction and the application of virtual environment logic; this is not related to a long-standing problem in computer technology. Additionally, there is no practical application, as there is no particular machine used to implement the claim language and only generic computer components are used to perform the invention. Also, there is no transformation of the machine used in the application into a different state or thing. Lastly, the claims do not attempt to apply the abstract idea in a meaningful way beyond simply using a generic computer. The various dependent claims only further detail the abstract idea or are insignificant extra-solution activity, and they also fail to rise to significantly more than the abstract ideas.

The additional element(s) or combination of elements in the claim(s) other than the abstract idea(s) per se, including one or more of an input device and an output device, amount to: (i) mere instructions to implement the idea on a computer, and/or (ii) recitation of generic computer structures that serve to perform generic computer functions that are well-understood, routine, and conventional activities previously known to the pertinent industry per the applicant’s description (Applicant’s specification, paragraphs [0107]-[0111]). Viewed as a whole, these additional claim element(s) do not provide meaningful limitation(s) to transform the abstract idea into a patent-eligible application of the abstract idea such that the claim(s) amount to significantly more than the abstract idea itself. Therefore, the claims are directed to an abstract idea that lacks significantly more and thus are not patent eligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 6-13 and 15-19 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Yerli (Pub. No. 2020/0404219).

Regarding claim 1, Yerli discloses a virtual reality system for a virtual performance (“FIG. 1 shows a schematic representation of a system 100 according to an embodiment of the present invention. The system 100 enables a remote audience 102, including remote viewers 104, to view and interact with immersive content 106 via channels 108, for example, interaction devices 110 such as a TV set, a set-top box, PCs, game consoles, mobile phones, laptops, mobile game consoles, head-mounted displays, see-through devices, and the like, connected to a network 112.
Preferably, in order to provide immersive experiences, interaction devices 110 include immersive reality devices, such as head-mounted displays or see-through devices (e.g., digital reality contact lenses). The remote audience 102 may be connected to one or more live events 114 that, once recorded, processed, rendered, and transmitted, are accessed by the remote audience 102 as immersive content 106 via any suitable digital network 112 such as the Internet that is accessed by the interaction devices 110. The live event 114 may be recorded using recording equipment 116 such as cameras and microphones, rendered via a renderer 118 and thereafter transmitted to the remote audience 102. In some embodiments, the recording equipment 116 includes Light Detection and Ranging (LIDAR) devices mounted thereon in order to provide precise distance and depth information of the live event 114. The remote audience 102 may interact with and enter feedback 120 on the immersive content 106 via channels 108 to at least one processing unit 122, either by using the same interaction device 110 with which the user views the immersive content 106, or through another interaction device 110. According to an embodiment, the interaction devices 110 may be configured to perform lightweight computations and light rendering operations on immersive content 106 sent by a cloud server 126 comprising the at least one processing unit 122 and receiving the recording of the live event 114. The at least one processing unit 122 of the cloud server 126 collects all feedback 120 from each individual remote viewer 104 and generates processed data 124 that is thereafter used to update the one or more live events 114 or to update only the local view of remote viewers 104. Preferably, the processing unit 122 and the renderer 118 may be provided within one processing system, such as in a cloud server 126. The cloud servers 126 may be located in compute centers (not shown) located nearest to the live event and to the remote viewers 104. The at least one processing unit 122 may additionally process data from the live events before being rendered and sent as immersive content to the remote audience 102”, [0057] & [0058]; “The live event 114 may preferably be any typical live event that may be delivered, digitally distributed, or broadcasted to a remote audience 102, such as a sports match, a concert, a television show, a political speech, a party, a live conference, a seminar or other academic courses, board meetings, a concurrent and synchronous experienced game-session, live collaborative engagement of multiple users, digital auctioning, voting sessions, e-commerce and live shopping, amongst others. However, the live event 114 may also be a computer-generated interactive environment, entirely produced by one or more processing units, such as a virtual, interactive scene or game, wherein the recording means may include renderers 118 and the feedback 120 means may be provided by interfaces affecting the virtual scene or game. However, it is to be understood that the interactive environment may also comprise a combination thereof, resulting in mixing a real-world interactive environment with a computer-generated scenario”, [0065]), comprising:

an immersive virtual reality environment defined by a performance framework corresponding to a scripted performance (“The immersive content 106 may include at least one of the following: image data, 3D geometries, video data, audio data, textual data, haptic data, or a combination thereof. In these embodiments, one or more parts of the immersive content 106 may include augmented reality (AR), virtual reality (VR), or mixed reality (MR) digital content. The AR digital content may include physical, real-world environment elements augmented by computer-generated sensory input such as sound, video, graphics, or GPS data. Augmentation techniques are typically performed in real-time and in semantic context with environmental elements, such as overlaying supplemental information or virtual objects in the real world. The AR digital content allows information about the surrounding real world of a remote viewer 104 or virtual objects overlay in the real world to become interactive and digitally manipulable. The VR digital content may include virtual elements that are used to replace the real world with a simulated one. The MR digital content may include a mixture of augmented physical, real-world environment elements interacting with virtual elements. For example, an MR experience may include a situation where cameras capture real humans. Subsequently, suitable computer software creates a 3D mesh of the humans that is then inserted into a virtual world and is able to interact with the real world”, [0062]; “The live event 114 may preferably be any typical live event that may be delivered, digitally distributed, or broadcasted to a remote audience 102, such as a sports match, a concert, a television show, a political speech, a party, a live conference, a seminar or other academic courses, board meetings, a concurrent and synchronous experienced game-session, live collaborative engagement of multiple users, digital auctioning, voting sessions, e-commerce and live shopping, amongst others. However, the live event 114 may also be a computer-generated interactive environment, entirely produced by one or more processing units, such as a virtual, interactive scene or game, wherein the recording means may include renderers 118 and the feedback 120 means may be provided by interfaces affecting the virtual scene or game. However, it is to be understood that the interactive environment may also comprise a combination thereof, resulting in mixing a real-world interactive environment with a computer-generated scenario”, [0065]) and comprising pre-defined visual data and pre-defined audio data (“Interaction elements 202 may refer to elements found within the interactive environments of live events and which users may be able to view and interact with via suitable channels (e.g., channels 108 in FIG. 1). Interaction elements 202 may refer to people 210, such as performers, event participants, live audience, etc.; objects 212, such as sports equipment, electronic devices, etc.; and live event stage 214, including walls, platforms, sports pitches, stadiums, performance stages or halls, as well as related settings such as lighting, sounds, and fog, amongst others”, [0074]), and

at least one user input device and at least one user output device for a user in electronic communication with the immersive virtual reality environment ([0057] & [0058]; the interaction devices are interpreted to be input devices and output devices),

wherein the user interacts with the immersive virtual reality environment to insert into the system, user visual data or user audio data created within the performance framework by the user during immersion in the virtual reality environment (“The incoming feedback data may preferably be processed into a presentable format.
The format may differ depending on the live event 114, the available set of actions for the remote audience 102, as well as presentation capabilities of the interactive environment hosting the live event 114. Furthermore, the processed data 124 may represent a collective feedback 120 in which each remote viewer 104 may provide the same type of feedback 120 with a different value, such as a vote, or in which a chosen group of remote viewers 104 chosen randomly or using other methods, or in which selected individual remote viewers 104, may return personalized feedback 120. The processing unit 122 may collect all incoming feedback data from the remote audience 102. The processing unit 122 may process the received feedback 120 into processed data 124 and may determine whether to directly and instantaneously update, modify and/or affect the live event 114 and the resulting immersive content 106 or whether to only update, modify and/or affect the immersive content 106. After processing, the processing unit 122 may transfer the processed data 124 to the interactive environment hosting the live event 114 via a suitable network” [0067] & [0068]; “The interactive reality volumes and content databases may enable passive and active manipulation modes 204 between remote viewers and immersive content such that remote viewers may provide various types of feedback to a processing unit including instructions to whether or not update the live event. Generally, feedback enabled by manipulation modes 204 include video data, audio data, textual data, haptic data, or a combination thereof, as enabled by the interaction devices, the type of event, and feedback options for remote viewers. Passive manipulation 216 may involve providing feedback to a processing unit (e.g., processing unit 122 of FIG. 1) in a way that the feedback data and instructions trigger the processing unit to update the rendered immersive content as viewed by individual or group remote viewers. For example, passive manipulation 216 may include modifying views (e.g., zooming in and out, panning, rotating views, moving around within the live event stage 214, etc.), requesting further information from the interaction elements 202 as provided by the content database, and other actions which may only modify the view for remote viewers without necessarily affecting the live event for other viewers. Active manipulation 218 may involve providing feedback to the processing unit in a way that the feedback data includes instructions to update the live event as well as the rendered immersive content, which can have an effect on the immersive content for other viewers as well. For example, active manipulation 218 may include modifying settings of the live event stage 214, such as lighting intensity or colors, sounds, fog, and the like. However, active manipulation 218 may as well include inserting elements into the live event stage 214, such as inserting text data, image data, video data, or other forms of data that may deliver a message to be displayed within the interactive environment hosting the live event, such as submitting an opinion or actively affecting the course of a live event through a vote or other means”, [0076]).

Regarding claim 2, Yerli discloses the performance framework includes multiple sub-frameworks that comprise user visual data or user audio data (“The processing unit within the cloud server (e.g. processing unit 122 and cloud server 126 of FIG. 1) may create interactive volumes on each of the interaction elements 202, enabling remote viewers to passively or actively manipulate each of the interaction elements 202. These interactive reality volumes may be created by distance interpolation methods applied on the interaction elements 202 to calculate the height and shape of each of the elements”, [0075]).

Regarding claim 3, Yerli discloses the user visual data or user audio data created by the user during immersion in the virtual reality environment is inserted into a database accessible by software that runs the virtual reality system (“The client 406 may, for example, provide feedback via a feedback channel 408 within a network 410 to a live event backend 412 of the system. As a non-limiting example, and with specific reference to FIG. 2, the client 406 may provide feedback in the form of passive manipulation 216 or active manipulation 218 of interaction elements 202, including video data, audio data, textual data, haptic data, or combinations thereof. The feedback data may first be handled by a message-oriented middleware 414, where it may be provided to the subsequent layers of the system for authorization via a security component 416, storage via a persistent storage 418, streaming by a streaming component 420, and caching by a caching device 422”, [0093]).

Regarding claim 4, Yerli discloses the inserted user visual data or user audio data adds data to the database ([0093]).

Regarding claim 6, Yerli discloses interaction with two or more users subsequently or simultaneously in respect of an immersive virtual reality environment defined by a performance framework (“The remote audience may furthermore include additional clients, such as “Client 2” to “Client 5”, that may participate and interact with the live event. However, it is to be understood that the present disclosure is not restricted to a remote audience of a particular size and number. Rather, the number of remote viewers and clients is not restricted and may only be limited by available processing resources of the system. Also, the creator and owner of a live event need not be part of the remote audience”, [0092]).

Regarding claim 7, Yerli discloses the performance framework comprises a gaming component ([0065]; “Another level of direct participation and interaction according to an embodiment of the present invention may include involving the remote viewers as actual players directly participating at the game show.
Either one, some, or all remote viewers may be enabled to participate as players and answer questions in order to win the game show, resulting in a mixed group of players playing the game together (or against each other) being locally and/or remotely present at the game show's location”, [0108]).

Regarding claim 8, Yerli discloses at least one processing device; at least one system database; a network; the network connecting the processing device and the user input device and the user output device in electronic communication ([0057] & [0058]).

Claims 9-11 and 13 are directed to the methods implemented by the systems of claims 1-4 and 6, respectively, and are rejected for the same reasons as claims 1-4 and 6, respectively. Claims 15 and 16 are directed to articles of manufacture that contain code that implements the system of claim 1 and are rejected for the same reasons as claim 1.

Regarding claim 17, Yerli discloses the scripted performance comprises one or more of a dramatic play, musical play, dance routine, choral piece, concert or ceremony ([0065]; “In yet another non-limiting exemplary embodiment of a system according to the present invention, the spectators of a theatre performance may be allowed to directly participate in the performance. The system could be used to connect a local audience that is directly present in an interactive environment, such as a theatre hall or a TV show studio, with a remote audience watching a recording of the performance at home using suitable devices. Thus, the system allows for a much bigger group of spectators to participate in the live event. For example, the performed piece or show may include a courtroom trial. While the actors play several roles, such as a judge, lawyers, and suspects, the remote and local audience may act as a jury and may vote. While most of the show may not require any direct participation, the audience will stay engaged since they know that they will have to provide feedback and therefore should pay attention. In addition to feedback related to the final vote, the viewers may also be required to provide feedback influencing the progression of the trial. Furthermore, the audience may also directly control stage effects, like lights, fog, or sounds”, [0106]).

Regarding claim 18, Yerli discloses the performance framework comprises one or more of acts, scenes or movements ([0065]).

Regarding claim 19, Yerli discloses the pre-defined audio data comprises one or more of dialogue lines, lyrics, or instrumental parts associated with the performance framework ([0076]).

Response to Arguments

Applicant’s arguments filed on January 19, 2026 have been fully considered, but they are not entirely persuasive. On pages 6-9, Applicant argues that the amended claims overcome Breindel because the platonic object is not a scripted performance, nor does it disclose pre-defined audio data. Examiner agrees. The rejections based on Breindel have been withdrawn. However, new rejections based on newly discovered prior art are detailed above.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to LAWRENCE STEFAN GALKA, whose telephone number is (571) 270-1386. The examiner can normally be reached M-F 6-9 & 12-5. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis, can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/LAWRENCE S GALKA/
Primary Examiner, Art Unit 3715

Prosecution Timeline

Oct 18, 2023
Application Filed
Sep 15, 2025
Non-Final Rejection — §101, §102
Jan 19, 2026
Response Filed
Apr 06, 2026
Non-Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12589294
SYSTEMS AND METHODS FOR ELECTRONIC GAME CONTROL WITH VOICE DETECTION AND AUDIO STREAM PROCESSING
Granted Mar 31, 2026 (2y 5m to grant)
Patent 12576334
RECEPTION APPARATUS, TRANSMISSION APPARATUS, AND INFORMATION PROCESSING METHOD
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12569764
INPUT ANALYSIS AND CONTENT ALTERATION
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12569756
CLOUD APPLICATION-BASED DEVICE CONTROL METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12573270
CONTROLLING A USER INTERFACE
Granted Mar 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.
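
How this list of "similar technology" grants is assembled is not disclosed; production tools likely match on CPC classes, claim text, and citation overlap. As a purely hypothetical illustration of one lightweight approach, the sketch below ranks the grant titles above against the application title using TF-IDF cosine similarity.

```python
# Hypothetical illustration only: ranking precedent grants by title
# similarity with TF-IDF. The real tool's matching method is not disclosed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

app_title = "SYSTEM AND METHOD FOR PERFORMANCE IN A VIRTUAL REALITY ENVIRONMENT"
grant_titles = [
    "SYSTEMS AND METHODS FOR ELECTRONIC GAME CONTROL WITH VOICE DETECTION "
    "AND AUDIO STREAM PROCESSING",
    "RECEPTION APPARATUS, TRANSMISSION APPARATUS, AND INFORMATION "
    "PROCESSING METHOD",
    "INPUT ANALYSIS AND CONTENT ALTERATION",
    "CONTROLLING A USER INTERFACE",
]

tfidf = TfidfVectorizer(stop_words="english")
matrix = tfidf.fit_transform([app_title] + grant_titles)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Print the grants most similar to the application first.
for score, title in sorted(zip(scores, grant_titles), reverse=True):
    print(f"{score:.2f}  {title}")
```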

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

Expected OA Rounds: 2-3
Grant Probability: 76%
With Interview: 95% (+18.6%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 851 resolved cases by this examiner. Grant probability derived from career allow rate.
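
A sketch of how this panel's numbers can be composed from the career statistics above follows. The inputs mirror the displayed figures; the simple additive treatment of the interview lift (capped at 100%) is an assumption about the tool's method, not its documented model.

```python
# Sketch of the projection arithmetic, assuming the panel reuses the career
# allow rate and adds the observed interview lift. Additive model assumed.
CAREER_GRANTS, CAREER_RESOLVED = 649, 851
INTERVIEW_LIFT = 0.186  # +18.6 points, from the examiner panel above

grant_probability = CAREER_GRANTS / CAREER_RESOLVED      # ~0.763
with_interview = min(grant_probability + INTERVIEW_LIFT, 1.0)

print(f"Grant probability: {grant_probability:.0%}")     # 76%
print(f"With interview:    {with_interview:.0%}")        # 95%
```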
