DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
The Amendment filed December 18, 2025 has been entered. Claims 1-9, 47, 51-58, and 101-102 are pending in the application. Claims 99 and 100 are cancelled. Applicant’s amendments to Claims 1, 47, and 57 have overcome the rejections previously set forth in the Final Office Action mailed June 20, 2025. A further search has been performed to address the amended material in those claims.
Response to Arguments
Applicant’s arguments with respect to claims 1-9, 47, 51-58, and 99-100 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Newly found references Leung (WO 2017004433 A1), Willette (US 10864447 B1), Shen (US 20210247737 A1), and XR Collaboratory (NPL: Michael Kass - Computer Vision and the Metaverse (CV4ARVR 2022)) have been applied to the amended claim limitations.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1, 5, 6, 7, 8, 47, 51, 55, 56, and 57 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Leung (WO 2017004433 A1).
Regarding claim 1:
Leung teaches:
A method comprising:
receiving at least one of an image of a user engaged in an extended reality (XR) session (Leung: some embodiments may obtain images of participants' eyes (broadcasters, players, commentators and/or spectators) captured during game play or broadcast, for example images captured by cameras attached to or integrated with […] virtual reality (VR) headsets [0125]; see Note 1A), a video of the user, an audio file of the user, bioinformation of the user, or a movement of the user (Leung: The various participant inputs may include one or more of, but are not limited to: audio or voice inputs such as in-game vocal communications or broadcast vocal channels; video or image inputs (e.g., video or images of a participant's facial expressions or eyes) [0064]);
analyzing the at least one of the image (Leung: The images may be analyzed, for example using techniques that detect emotions or other states via tracking and analysis of eye movements, blinking, dilation, and so on, [0125]), the video, the audio file (Leung: In some embodiments, participant information obtained for or with the audio input signals may be used when analyzing the audio input signals to determine information [0152]), the bioinformation, or the movement;
determining an emotional state of the user based on the analyzing (Leung: the information determined from the analysis of the participant audio inputs may, for example, indicate an emotional state or states (e.g., excitement, stress, fear, shock, surprise, amusement, etc.) of individual participants [0153]);
identifying a causal event based on the determining (Leung: Various emotions such as surprise, fear, happiness, intense concentration, and so on may be detected that may be correlated to in-game events (victories, defeats, startling in-game events, etc.) or to broadcast events (e.g., the broadcaster spilled a drink on his keyboard, fell out of his chair, etc.). [0148]);
generating session information during the XR session (Leung: Spectating system 100 may generate broadcast content 126 for the broadcast 142 based at least in part on game metadata 124A, [0101]);
in response to the identifying of the causal event, associating the session information generated during the XR session with the determined emotional state of the user (Leung: Various emotions such as surprise, fear, happiness, intense concentration, and so on may be detected that may be correlated to in-game events (victories, defeats, startling in-game events, etc.) or to broadcast events (e.g., the broadcaster spilled a drink on his keyboard, fell out of his chair, etc.). [0148]); and
generating a physical artifact (Leung: the spectating system may provide, or may provide access to, a "print on demand" service whereby 3D printing technology may be used to print physical objects based on input designs or specifications of game-related objects or items [0175]; the game-related content may include virtual game items or objects, e.g., digital representations of physical objects such as in-game gear, clothing, weapons, characters, and avatars [0175]) based on at least a portion of the session information generated during the XR session (Leung: The designs or specifications for the objects may be obtained from the game systems in the game metadata, or may be otherwise obtained [0175]; see Note 1B) and associated with the determined emotional state of the user (Leung: the game metadata and/or broadcast metadata may indicate an emotion or emotional state (e.g., stress, excitement, anger, sadness, happiness, frustration, etc.) for one or more of the players, and the players’ avatars or online characters [0176]).
Note 1A: The Examiner understands extended reality to be an umbrella term which encompasses technologies such as augmented reality (AR), virtual reality (VR), mixed reality (MR), and the like.
Note 1B: In the remarks filed December 18, 2025, Applicant states: “For example, session information includes "audiovisual information, random number seeds, player actions, NPCs actions, and the like" or, in another example, "audiovisual information and 3D models ... captured using a scene descriptor format such as universal scene descriptor (USD), GL transmission format binary file (GLB), GL transmission file (glTF), immersive technology media format (ITMF), ORBX, and the like" (paragraphs [0040]-[0041]).”
Leung teaches: “The broadcast content may include UI elements and/or overlays representing or corresponding to game-related content such as virtual game items or objects (e.g., digital representations of physical objects) such as in-game gear, clothing, weapons, characters, avatars, powers, and so on, physical items such as physical representations of virtual objects from within the games (e.g., physical swords, action figures, toys, etc.), and game-related physical merchandise such as t-shirts or hats, and may also include UI elements and/or overlays representing or describing game events and/or game states as indicated in the game metadata” [0106] (emphasis added).
Leung teaches that game events include player actions: “Examples of game events may include, but are not limited to: particular achievements by particular players or teams of players …” [0115]. Therefore, the Examiner interprets the broadcast content of Leung to be analogous to the claimed session information.
As cited above, Leung also teaches that the broadcast content includes game-related physical merchandise generated by the spectating system. Accordingly, the printed physical objects are broadcast content and are thus based on at least a portion of the session information generated during the XR session.
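For clarity of the record, the following non-limiting Python sketch illustrates the claim 1 workflow as mapped to Leung above: receive participant inputs, determine an emotional state, identify a causal event, associate session information with that state, and generate a physical artifact. All function and class names are hypothetical illustrations by the Examiner; neither the application nor Leung discloses source code.

    from dataclasses import dataclass, field

    @dataclass
    class SessionRecord:
        # Session information generated during the XR session
        # (cf. Leung's broadcast content and game/broadcast metadata [0101]).
        events: list = field(default_factory=list)      # e.g., [("t=42s", "boss defeated")]
        emotion_tags: dict = field(default_factory=dict)

    def analyze_inputs(image=None, video=None, audio=None, bio=None, movement=None):
        # Stand-in for eye-tracking, voice, or biometric analysis (Leung [0064], [0125]).
        return "surprise"

    def identify_causal_event(emotion, events):
        # Correlate the detected emotion to a recent in-game event (Leung [0148]).
        return events[-1] if events else None

    def generate_physical_artifact(session, event):
        # Stand-in for Leung's "print on demand" step driven by game metadata [0175].
        print(f"3D-printing artifact for event {event!r} tagged {session.emotion_tags[event]!r}")

    def process_session(inputs, session):
        emotion = analyze_inputs(**inputs)                       # determine emotional state
        event = identify_causal_event(emotion, session.events)  # identify causal event
        if event is not None:
            session.emotion_tags[event] = emotion                # associate session info
            generate_physical_artifact(session, event)

    session = SessionRecord(events=[("t=42s", "boss defeated")])
    process_session({"image": "frame.png"}, session)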
Regarding claim 5:
Leung teaches:
The method of claim 1, wherein:
the receiving includes the image, the video (Leung: video or image inputs (e.g., video or images of a participant's facial expressions or eyes) [0064]), the audio file (Leung: audio or voice inputs such as in-game vocal communications or broadcast vocal channels [0064]), the bioinformation (Leung: biometric inputs from the players [0064]), the movement (Leung: participant inputs 465 may also include inputs from input devices and technologies such as […] motion tracking systems [0127]), and
the analyzing includes the image (Leung: The images may be analyzed [0123]), the video (Leung: the spectating system may obtain and analyze various inputs (e.g., […] video [0107]), the audio file (Leung: analysis of the participant audio [0123]), the bioinformation (Leung: some embodiments may obtain and analyze biometric data (e.g., pulse, heartrate, perspiration, etc.) for participants [0123]), and the movement (Leung: inputs from input devices and technologies such as […] motion tracking systems, gesture-based input systems, and so on that can be analyzed and used to determine events in respective broadcasts 442 [0127]).
Regarding claim 6:
Leung teaches:
The method of claim 1 (as shown above), wherein the bioinformation includes at least one of a heart rate, an electroencephalogram reading, or a respiration rate (Leung: biometric data (e.g., pulse, heartrate, perspiration, etc.) [0123]).
Regarding claim 7:
Leung teaches:
The method of claim 5, wherein the analyzing includes determining at least one of a change in a skin tone of the user, a blink rate of the user, or an eye movement of the user (Leung: The images may be analyzed, for example using techniques that detect emotions or other states via tracking and analysis of eye movements, blinking, dilation, and so on [0125]).
Regarding claim 8:
Leung teaches:
The method of claim 1 (as shown above), wherein: the determining of the emotional state of the user (Leung: The participant video may be analyzed, for example using facial recognition techniques and techniques that detect emotions [0148]) includes determining the emotional state of each user of a group of users including the user (Leung: In some embodiments, participant inputs 465 may include video of participants (broadcasters and/or spectators) [0148]), and each user of the group of users is engaged in the XR session (Leung: some embodiments may obtain images of participants' eyes (broadcasters, players, commentators and/or spectators) captured during game play or broadcasts 442, for example images captured by cameras attached to or integrated with […] virtual reality (VR) headsets [0148]; see also Note 1A).
Regarding claim 47:
Claim 47 is substantially similar to Claim 1 and is therefore rejected for similar reasons. Claim 47 contains the following notable differences:
Claim 47 recites that the user is engaged in a gaming session instead of an XR session. Leung teaches that “embodiments are primarily described herein in the context of spectating systems that broadcast game play in multiplayer online gaming environments in which two or more players remotely participate in online game sessions” [0056].
Regarding claim 51:
Claim 51 is substantially similar to Claim 1 and is therefore rejected for similar reasons. Claim 51 contains the following notable differences:
Claim 51 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Regarding claim 55:
Claim 55 is substantially similar to Claim 5 and is therefore rejected for similar reasons. Claim 55 contains the following notable differences:
Claim 55 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Regarding claim 56:
Claim 56 is substantially similar to Claim 6 and is therefore rejected for similar reasons. Claim 56 contains the following notable differences:
Claim 56 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Regarding claim 57:
Claim 57 is substantially similar to Claim 7 and is therefore rejected for similar reasons. Claim 57 contains the following notable differences:
Claim 57 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 2 and 52 are rejected under 35 U.S.C. 103 as being unpatentable over Leung (WO 2017004433 A1) in view of Willette (US 10864447 B1 (see attached document for paragraph numbers)).
Regarding claim 2:
Leung teaches:
The method of claim 1 (as shown above), comprising:
Leung fails to teach:
training a model with at least one of the image, the video, the audio file, the bioinformation, the movement, the XR session, the emotional state, the causal event, the session information, or the physical artifact; and
during a subsequent XR session, performing at least one of the receiving, the analyzing, the determining, the identifying of the causal event, the identifying of the session information, or the generating based on the model.
Willette teaches:
training a model with at least one of the image, the video (Willette: A machine learning analysis engine 4710 of a machine learning analysis service 4700 may perform video analysis 4712 on input videos 4762 using a corpus of training data (93)), the audio file, the bioinformation, the movement, the XR session, the emotional state, the causal event, the session information, or the physical artifact; and
during a subsequent XR session (see Note 2A), performing at least one of the receiving, the analyzing, the determining, the identifying of the causal event, the identifying of the session information, or the generating based on the model (Willette: Auto-generation analysis modules 478 may, for example, include a machine learning analysis module or service that analyzes video content to determine highlights, and/or a statistically improbable analysis module or service that analyzes input game data to determine highlights. (60)).
Note 2A: It would have been obvious to one of ordinary skill in the art to train the machine learning model before utilizing it during a subsequent XR session.
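A minimal, hypothetical Python sketch of the train-then-deploy ordering discussed in Note 2A follows. The threshold "model" and the heart-rate feature are illustrative stand-ins chosen by the Examiner; Willette's machine learning analysis engine (paragraphs (60), (93)) is not reproduced here.

    def train_model(labeled_examples):
        # labeled_examples: (heart_rate, was_highlight) pairs collected from prior
        # sessions (images, audio, bioinformation, etc. could supply the features).
        highs = [x for x, y in labeled_examples if y]
        lows = [x for x, y in labeled_examples if not y]
        return (min(highs) + max(lows)) / 2.0  # a trivial learned threshold

    def apply_model(threshold, live_heart_rate):
        # Applied during a *subsequent* XR session, after training has completed.
        return live_heart_rate >= threshold

    threshold = train_model([(95, True), (120, True), (70, False), (80, False)])
    print(apply_model(threshold, 110))  # True: flags a likely highlight/causal event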
Regarding claim 52:
Claim 52 is substantially similar to Claim 2 and is therefore rejected for similar reasons. Claim 52 contains the following notable differences:
Claim 52 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Claims 3, 4, 53, and 54 are rejected under 35 U.S.C. 103 as being unpatentable over Leung (WO 2017004433 A1) in view of Willette (US 10864447 B1 (see attached document for paragraph numbers)) and Shen (US 20210247737 A1).
Regarding claim 3:
Leung in view of Willette teaches:
The method of claim 2 (as shown above), comprising
Leung in view of Willette fails to teach:
verifying the model by comparing the physical artifact with a quality metric.
Shen teaches:
verifying the model by comparing the physical artifact with a quality metric (Shen: The present invention provides an end-to-end solution that connects the entire printing manufacturing process to form a closed loop while taking into account more comprehensive factors that affect printing precision; see Note 3A).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine the teachings of Shen with Leung in view of Willette. Verifying the model by comparing the physical artifact with a quality metric, as in Shen, would benefit Leung in view of Willette by ensuring the 3D printed model has minimal errors: “Compared with traditional methods, however, 3D printing technologies in the prior art generally have low precision when building objects, and thus cannot reach the optimal level to meet demands in some cases” (Shen, [0003]).
Note 3A: Shen teaches the generation of a “deformation network” created based on the physical 3D printed model and 3D model data: “A neural network-based error compensation method for 3D printing includes: compensating an input model by a deformation network/inverse deformation network constructed and trained according to a 3D printing deformation function/inverse deformation function, and performing the 3D printing based on the compensated model” (Abstract). Shen further teaches metrics such as “Precision”, “Accuracy”, and “Recall” that “are used as indices to test the learning performance of the neural network, and the optimal inverse deformation network is selected based on these indices” [0093]. Therefore, as best understood by the examiner, Shen teaches verifying the model by comparing the physical artifact with a quality metric.
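A minimal, hypothetical Python sketch of the verification mapped in Note 3A follows: a (scanned) printed artifact is compared against the intended model using precision/recall-style indices of the kind Shen lists in [0093]. The voxel-set representation and the acceptance criterion are the Examiner's illustrative assumptions.

    def quality_metrics(intended, printed):
        # intended, printed: sets of occupied voxel coordinates.
        tp = len(intended & printed)   # material present where it should be
        fp = len(printed - intended)   # excess material
        fn = len(intended - printed)   # missing material
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        return precision, recall

    intended = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
    printed = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (2, 0, 0)}
    precision, recall = quality_metrics(intended, printed)
    print(f"precision={precision:.2f} recall={recall:.2f}")  # 0.75 / 0.75
    # The model is verified only if both indices exceed a chosen quality metric.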
Regarding claim 4:
Leung in view of Willette teaches:
The method of claim 2 (as shown above), wherein the model is trained with (Willette: Training inputs 4726 may include, but are not limited to, inputs from humans specifying, voting on, and/or ranking highlight segments being processed by training module 4730 for inclusion as additional highlight attributes 4732 in the machine learning analysis data 4750 (95))
the video (Leung: video or image inputs (e.g., video or images of a participant's facial expressions or eyes) [0064]), the audio file (Leung: audio or voice inputs such as in-game vocal communications or broadcast vocal channels [0064]), the bioinformation (Leung: biometric inputs from the players [0064]), the movement (Leung: participant inputs 465 may also include inputs from input devices and technologies such as […] motion tracking systems [0127]), the XR session (see Note 4B), the emotional state (Willette: The participant video may be analyzed, for example using facial recognition techniques and techniques that detect emotions via analysis of facial expressions, (83); see Note 4C), the causal event (Willette: Various emotions such as surprise, fear, happiness, intense concentration, and so on may be detected that may be correlated to in-game events (victories, defeats, startling in-game events, etc.) or to broadcast events (e.g., the broadcaster spilled a drink on his keyboard, fell out of his chair, etc.). (83); see Note 4C), the session information (Leung: The events and other information determined from analyzing the participant inputs may collectively be referred to as broadcast metadata. The spectating system may generate broadcast content for respective broadcasts at least in part from the broadcast metadata [0064]; see Note 4D),
Note 4A: Willette teaches: “Training inputs 4726 may include, but are not limited to, inputs from humans specifying, voting on, and/or ranking highlight segments being processed by training module 4730 for inclusion as additional highlight attributes 4732 in the machine learning analysis data 4750” (paragraph (95)).
Willette teaches that the “humans specifying, voting on, and/or ranking highlight segments” are the broadcaster/participants: “spectators and/or broadcasters may vote on events in broadcast 412 streams or game sessions via respective spectating system clients to determine if the event is to be a highlight 458B.” (paragraph (73)).
In other words, the input from the human may not necessarily be limited to the vote/ranking input, and may include other inputs from the broadcaster/participants. Accordingly, when combined with the teachings of Leung, one of ordinary skill in the art would understand that any participant input could be utilized to determine a highlight by the machine learning model.
Note 4B: The Examiner submits that training the machine learning model on video data and audio data of the XR session would be understood by one of ordinary skill in the art to be the same as training on the XR session itself.
Note 4C: Similarly to Leung, Willette teaches that video data may be analyzed, and that “A machine learning analysis engine 4710 of a machine learning analysis service 4700 may perform video analysis 4712 on input videos 4762 using a corpus of training data 4750” (paragraph (93)). Willette also teaches that “Machine learning analysis data 4750 may be trained on and store highlight attributes 4732” (paragraph (93)). Therefore, any analyzed data obtained from the video may also be used for training. In paragraph (83) as cited above, Willette teaches that emotions and the causal event may be obtained from the video.
Note 4D: In [0064] as cited above, Leung teaches “broadcast metadata” obtained from analyzing the inputs of the participants, which is then used to generate broadcast content. In Note 4A, it was discussed that “any participant input could be utilized to determine a highlight by the machine learning model.” The Examiner submits that it would be obvious to one of ordinary skill in the art to train on the broadcast metadata because it was determined from analyzing participant inputs.
Leung and Willette fail to explicitly teach:
wherein the model is trained with the physical artifact.
Shen teaches:
wherein the model is trained with the physical artifact (Shen: Training samples of the deformation network/inverse deformation network include to-be-printed model samples and printed model samples, Abstract).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine the teachings of Shen with Leung in view of Willette. Training with the physical artifact, as in Shen, would benefit Leung in view of Willette by ensuring the 3D printed model has minimal errors: “Compared with traditional methods, however, 3D printing technologies in the prior art generally have low precision when building objects, and thus cannot reach the optimal level to meet demands in some cases” (Shen, [0003]).
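For illustration only, the following hypothetical Python schema shows one way the multimodal training record contemplated by the claim 4 combination could be assembled. The field names are the Examiner's; Leung [0064], Willette (95), and Shen (Abstract) describe the data sources, not this structure.

    from dataclasses import dataclass

    @dataclass
    class TrainingRecord:
        video_features: list      # facial-expression features (Leung [0064])
        audio_features: list      # vocal-channel features (Leung [0064])
        bio_features: list        # pulse, heart rate, etc. (Leung [0123])
        movement_features: list   # motion-tracking inputs (Leung [0127])
        emotional_state: str      # detected emotion label (Willette (83))
        causal_event: str         # correlated in-game/broadcast event (Willette (83))
        session_metadata: dict    # broadcast metadata (Leung [0064])
        printed_scan: list        # scan of the printed artifact (Shen, Abstract)

    record = TrainingRecord(
        video_features=[0.1, 0.7], audio_features=[0.3], bio_features=[98.0],
        movement_features=[0.0, 1.2], emotional_state="surprise",
        causal_event="boss defeated", session_metadata={"match_id": 17},
        printed_scan=[1, 1, 0, 1],
    )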
Regarding claim 53:
Claim 53 is substantially similar to Claim 3 and is therefore rejected for similar reasons. Claim 53 contains the following notable differences:
Claim 53 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Regarding claim 54:
Claim 54 is substantially similar to Claim 4 and is therefore rejected for similar reasons. Claim 54 contains the following notable differences:
Claim 54 recites a system instead of a method. Leung teaches a system: “A spectating system” (Abstract).
Claims 9 and 58 are rejected under 35 U.S.C. 103 as being unpatentable over Leung (WO 2017004433 A1) in view of Deaver (US 20140247989 A1).
Regarding claim 9:
Leung teaches:
The method of claim 8 (as shown above), wherein: the identifying of the causal event includes
Leung fails to explicitly teach:
comparing the emotional state of each user of the group of users engaged in the XR session, and the causal event occurs based on a significant shift in the emotional state of a first user of the group of users relative to the emotional state of a second user of the group of users.
Deaver teaches:
comparing the emotional state of each user of the group of users (Deaver: Changes in the emotional state Δ(emotional state) of User A may be compared with changes in the emotional state Δ(emotional state) of User B during a given time period [0085]) engaged in the XR session (see Note 9A), and the causal event occurs based on a significant shift in the emotional state (Deaver: events 442 may be time correlated to allow monitoring of emotional state or Δ(emotional state) with respect to event(s) 442. [0086]) of a first user of the group of users relative to the emotional state of a second user of the group of users (Deaver: Method 350 tracks relative relationships of emotional state Δ(emotional state). The values of emotional state may not be concrete valuations (i.e., "12.9=happy, 4.3=sad") but rather are used as comparators ("User A with a value of 12.9 is relatively happier than User B with a value of 4.3") [0083]).
Note 9A: When the teachings of Deaver are applied to Leung, the users would be engaged in an XR session, as Leung teaches obtaining emotional data via the virtual reality images: “some embodiments may obtain images of participants' eyes […] by cameras attached to or integrated with […] virtual reality (VR) headsets, and the like. The images may be analyzed, for example using techniques that detect emotions” [0148].
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Deaver with Leung. Having the identifying of the causal event include comparing the emotional state of each user of the group of users engaged in the XR session, with the causal event occurring based on a significant shift in the emotional state of a first user relative to the emotional state of a second user, as in Deaver, would benefit the Leung teachings by enabling individualized detection of which users experienced a large change in emotional state because of an event and which did not.
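A minimal, hypothetical Python sketch of Deaver's relative Δ(emotional state) comparison ([0083]-[0086]) as applied above follows. The scores and the significance ratio are the Examiner's illustrative assumptions.

    def delta(series):
        # Change in an emotional-state score over a time window
        # (Deaver's Δ(emotional state)).
        return series[-1] - series[0]

    def significant_shift(user_a, user_b, ratio=3.0):
        # The values are used only as comparators, not concrete valuations
        # (Deaver [0083]).
        da, db = abs(delta(user_a)), abs(delta(user_b))
        return da >= ratio * max(db, 1e-9)

    user_a = [4.3, 5.0, 12.9]  # large swing: likely reacting to an in-session event
    user_b = [6.0, 6.2, 6.1]   # relatively stable over the same window
    print(significant_shift(user_a, user_b))  # True: treated as a causal-event indicator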
Regarding claim 58:
Leung teaches:
The system of claim 51 (as shown above), wherein: the determining of the emotional state of the user (Leung: The participant video may be analyzed, for example using facial recognition techniques and techniques that detect emotions [0148]) includes determining the emotional state of each user of a group of users including the user (Leung: In some embodiments, participant inputs 465 may include video of participants (broadcasters and/or spectators) [0148]), and each user of the group of users is engaged in the XR session (Leung: some embodiments may obtain images of participants' eyes (broadcasters, players, commentators and/or spectators) captured during game play or broadcasts 442, for example images captured by cameras attached to or integrated with […] virtual reality (VR) headsets [0148]; see also Note 1A),
Leung fails to explicitly teach:
the identifying of the causal event includes comparing the emotional state of each user of the group of users engaged in the XR session, and the causal event occurs based on a significant shift in the emotional state of a first user of the group of users relative to the emotional state of a second user of the group of users.
Deaver teaches:
comparing the emotional state of each user of the group of users (Deaver: Changes in the emotional state Δ(emotional state) of User A may be compared with changes in the emotional state Δ(emotional state) of User B during a given time period [0085]) engaged in the XR session (see Note 9A), and the causal event occurs based on a significant shift in the emotional state (Deaver: events 442 may be time correlated to allow monitoring of emotional state or Δ(emotional state) with respect to event(s) 442. [0086]) of a first user of the group of users relative to the emotional state of a second user of the group of users (Deaver: Method 350 tracks relative relationships of emotional state Δ(emotional state). The values of emotional state may not be concrete valuations (i.e., "12.9=happy, 4.3=sad") but rather are used as comparators ("User A with a value of 12.9 is relatively happier than User B with a value of 4.3") [0083]).
Before the effective filing date of the claimed invention, it would have been obvious to a person having ordinary skill in the art to combine the teachings of Deaver with Leung. Having the identifying of the causal event include comparing the emotional state of each user of the group of users engaged in the XR session, with the causal event occurring based on a significant shift in the emotional state of a first user relative to the emotional state of a second user, as in Deaver, would benefit the Leung teachings by enabling individualized detection of which users experienced a large change in emotional state because of an event and which did not.
Claims 101 and 102 are rejected under 35 U.S.C. 103 as being unpatentable over Leung (WO 2017004433 A1) in view of XR Collaboratory (NPL: Michael Kass - Computer Vision and the Metaverse (CV4ARVR 2022)).
Regarding claim 101:
Leung teaches:
The method of claim 1 (as shown above), wherein:
generating the physical artifact comprises generating a three-dimensional printed object representing at least a portion of the virtual environment (Leung: 3D printing technology may be used to print physical objects based on input designs or specifications of game-related objects or items (e.g., in-game characters, weapons, vehicles, monsters, etc.) [0175]).
Leung fails to teach:
the session information comprises a scene descriptor format defining a virtual environment of the XR session, and
generating the physical artifact comprises generating a three-dimensional printed object representing at least a portion of the virtual environment defined by the scene descriptor format.
XR Collaboratory teaches:
the session information comprises a scene descriptor format defining a virtual environment of the XR session (see Note 101A)
Note 101A: The specification of the present application recites: “3D models may be captured using a scene descriptor format such as universal scene descriptor (USD), GL transmission format binary file (GLB), GL transmission file (glTF), immersive technology media format (ITMF), ORBX, and the like” [0041].
At 8:00 to 8:24 in the video, the presenter states “we need a representation that’s open and very capable that can describe virtual worlds with the richness that we need for the full variety of experiences that we want in the metaverse. We believe that USD is the right answer to that.” Therefore, when combining the teachings of XR Collaboratory with Leung, it would have been obvious to one of ordinary skill in the art to utilize the universal scene descriptor (USD) format to define a virtual environment of the XR session.
[media_image1.png, greyscale: Snapshot of 8:11 of the XR Collaboratory reference.]
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to combine the teachings of XR Collaboratory with Leung. Having the session information comprise a scene descriptor format defining a virtual environment of the XR session, as in XR Collaboratory, would benefit the Leung teachings because USD is easily extensible and scalable (XR Collaboratory, 9:18, as shown below).
[media_image2.png, greyscale: Snapshot of 9:18 of the XR Collaboratory reference. The slide states that USD “can scale to large data sets with lazy loading, levels of detail” and is “easily extensible for custom data schemas, input and output formats, and methods for asset search”.]
Leung in view of XR Collaboratory teaches:
generating the physical artifact comprises generating a three-dimensional printed object representing at least a portion of the virtual environment defined by the scene descriptor format (see Note 101B).
Note 101B: When the teachings of XR Collaboratory are combined with Leung, the portion of the virtual environment to be printed would be defined with the USD format.
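For illustration only, and assuming the OpenUSD ("pxr") Python bindings, the following hypothetical sketch shows how mesh geometry describing a portion of the virtual environment could be read from a USD scene for use by a 3D-printing pipeline. Neither Leung nor the XR Collaboratory presentation provides this code, and the file name is hypothetical.

    from pxr import Usd, UsdGeom

    def printable_meshes(usd_path):
        stage = Usd.Stage.Open(usd_path)   # open the USD scene description
        meshes = []
        for prim in stage.Traverse():      # walk the scene graph
            if prim.IsA(UsdGeom.Mesh):
                mesh = UsdGeom.Mesh(prim)
                meshes.append({
                    "path": str(prim.GetPath()),
                    "points": mesh.GetPointsAttr().Get(),  # vertex positions
                    "face_counts": mesh.GetFaceVertexCountsAttr().Get(),
                    "face_indices": mesh.GetFaceVertexIndicesAttr().Get(),
                })
        return meshes

    # e.g., hand the selected portion of the environment to a slicer/print service:
    # meshes = printable_meshes("xr_session_scene.usda")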
Regarding claim 102:
Claim 102 is substantially similar to Claim 101 and is therefore rejected for similar reasons. Claim 102 contains the following notable differences:
Claim 102 recites that the user is engaged in a gaming session instead of an XR session. Leung teaches that “embodiments are primarily described herein in the context of spectating systems that broadcast game play in multiplayer online gaming environments in which two or more players remotely participate in online game sessions” [0056].
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to VINCENT ALEXANDER PROVIDENCE whose telephone number is (571)270-5765. The examiner can normally be reached Monday-Thursday 8:30-5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon can be reached on (571)270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/VINCENT ALEXANDER PROVIDENCE/Examiner, Art Unit 2617
/KING Y POON/Supervisory Patent Examiner, Art Unit 2617