DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-3, 5-7, 10-12, and 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Kavallierou (US Pat. Pub. No. 20200346109, “Kavallierou”) in view of Paez et al. (US Pat. Pub. No. 20230251705, “Paez”).
Regarding claim 10, Kavallierou teaches a server comprising: a communication interface; and a controller (integral part of the game server) configured to: establish, via the communication interface, a first communication link with a first client device and a second communication link with a second client device (“[0030] In the present disclosure, each video game playing device provides a respective participant of a video game session with access to an online multiplayer game. For this reason, each computing device is referred to as a client device. The video game session may correspond to e.g. a sports or combat match, such as a battle-royale mode combat match. Each client device is in communication with a central game server, via a communications network. The game server receives and processes each player's input and generates an authoritative source of events occurring within the video game”), the server configured to
generate scene rendering data for a multi-user virtual platform, the first client device and the second client device configured to process respective scene rendering data to render respective scenes of the multi-user virtual platform (“[0040] At a first step S501, player inputs and game state information from a plurality of client devices participating in a video game session is obtained. The player inputs may correspond to e.g. controller inputs received at a controller device that each participant is using to control a respective avatar within a shared virtual environment. The game state information may include player state information such as e.g. the location of a respective player in the shared virtual environment, a game mode associated with a respective player (e.g. team mode), an identity associated with a respective player, the identities of other players on the same team as that player, etc. The game state information may also include information about the state of the virtual environment, such as e.g. a game mode (e.g. map shrinking), physics calculations that are to be performed, etc”);
responsive to determining that the second client device has higher processing capability than the first client device, select the second client device as a rendering peer for the first client device (“[0046] At a third step S503, the at least one client device that is determined as being likely to render the video game instance at a quality that is less than a threshold quality, is identified. [0049] At a fourth step S504, a cloud game client (CGC) is allocated to the identified client device. The cloud game client may correspond to a duplicate of a video game client implemented at the client devices. The cloud game client may comprise a pool of an instance of hardware available on the network (forming part of ‘the cloud’) having more powerful CPU and GPU capabilities than the client device”);
establish communication between the first client device and the second client device (“[0078] Whilst the above system has been described in relation to client devices operable to connect to one or more servers forming a cloud gaming service, it will be appreciated that in some examples, the cloud gaming service may be formed of a peer-to-peer network of remote client devices connected to one another and a local client device, via a communications network”); and
transmit, via the communication interface, via the second communication link, to the second client device: first scene rendering data associated with the first client device; and a command to control the second client device to (“[0050] The cloud game client may be configured to create a gameplay session for those game clients that are about to experience a low frame rate (or visual fidelity) experience. Hence, at a fifth step S505, the method may comprise providing the player inputs from the at least one identified client device and the obtained game state information, to the cloud game client. The player inputs and the obtained game state information may be provided from e.g. the game server to the one or more cloud devices at which the cloud game client is being implemented”):
generate, as the rendering peer for the first client device, a first rendered scene from the first scene rendering data (“[0051] At step S506, the method comprises rendering, based on the obtained game state information and player inputs provided to the cloud game client, the video game instance for the identified client device. This rendering may involve generating a video stream for outputting at a display associated with the identified client device”), and
transmit the first rendered scene to the first client device for providing at the first client device (“[0052] At step S507, the method comprises transmitting the video game instance rendered by the cloud game client to the corresponding (identified) client device. The video game instance, in the form a video stream, may be transmitted to the client device via the communications network”), but is silent about transmitting second scene rendering data associated with the second client device, and generating a second rendered scene from the second scene rendering data, the second rendered scene for providing at the second client device.
Paez teaches transmitting second scene rendering data associated with the second client device, and generating a second rendered scene from the second scene rendering data, the second rendered scene for providing at the second client device (“[0101] At operation 406, a server device transmits VR data to a second computing device. The VR data may include model data, 3D mesh data, asset data, state data, VR format files, or one or other data assemblies indicating information for enabling the second computing device to render and/or interact with the cooperative VR environment or component elements thereof. [0102] At operation 408, the second computing device renders a cooperative VR environment on a second VR display. Specifically, the second computing device renders the cooperative VR environment on the second VR display based on the VR data, such that an associated user is provided with an immersive experience within the cooperative VR environment”).
Kavallierou and Paez are analogous art, as both are related to rendering.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Kavallierou by transmitting second scene rendering data associated with the second client device, and generating a second rendered scene from the second scene rendering data, the second rendered scene for providing at the second client device, as taught by Paez.
The motivation for the above is to maximize the utilization of the second client device.
Claim 1 is a method claim whose steps are similar in scope to the functions performed by the device of claim 10; therefore, claim 1 is also rejected under the same rationale as set forth in the rejection of claim 10.
Regarding claims 2 and 11, Kavallierou modified by Paez teaches wherein establishing, via the server, communication between the first client device and the second client device comprises: transmitting, via the server, to one or more of the first client device and the second client device, connectivity information to establish a third communication link therebetween, such that communication between the first client device and the second client device occurs via the third communication link, or wherein establishing communication between the first client device and the second client device occurs via the server as an intermediary between the first client device and the second client device (Kavallierou Fig. 6 shows communication between the game client and the cloud game client through the game server).
Regarding claims 3 and 12, Kavallierou modified by Paez teaches further comprising, after communication between the first client device and the second client device is established: receiving, at the server, via the first communication link, from the first client device, respective virtual location data indicating one or more of a position and an orientation of a first virtual user in the multi-user virtual platform, the first virtual user associated with the first client device; updating, via the server, the first scene rendering data based on the respective virtual location data; and transmitting, via the server, via the second communication link, to the second client device, the first scene rendering data, as updated, to cause the second client device to generate, as the rendering peer for the first client device, an updated first rendered scene and transmit the updated first rendered scene to the first client device (Kavallierou “[0054] The method may also comprise obtaining, via the communications network, updated game state information. That is, the method may comprise continuously monitoring the game state information. [0067] As described previously, the game state information provides information indicative of a density of players concurrently occupying or likely to occupy a region of the virtual environment, and includes a status of the players participating in the video game session (e.g. location in virtual environment, team mode) and a status of the virtual environment (e.g. map shrinking) This information enables the monitoring unit 606 to predict whether the game requirements for a given player are likely to exceed the e.g. GPU and CPU capabilities of that player's client device.”).
Regarding claims 5 and 14, Kavallierou modified by Paez teaches further comprising determining, via the server, that the second client device has higher processing capability than the first client device by: comparing respective processing capabilities of the first client device and the second client device, the respective processing capabilities comprising one or more of: respective numbers of images per second that each of the first client device and the second client device are capable of rendering; and respective speeds of respective processors of the first client device and the second client device (Kavallierou “[0041]…..In this way, step S502 comprises predicting, based on the obtained game state information, whether at least one of the CPU usage, GPU usage and network traffic associated with one or more players is likely to negatively impact their experience of the video game session. The threshold quality may correspond to e.g. a predetermined frame rate, visual fidelity, etc” Here, the first client device's speed (frame rate) is lower than the predetermined threshold, while the CGC's speed is higher than it).
Regarding claims 6 and 15, Kavallierou modified by Paez teaches further comprising: comparing a respective processing capability of the second client device to a threshold processing capability; and responsive to determining that the respective processing capability of the second client device is higher than the threshold processing capability, proceeding with selecting the second client device as the rendering peer for the first client device; and responsive to determining that the respective processing capability of the second client device is less than the threshold processing capability, failing to proceed with selecting the second client device as the rendering peer for the first client device; and one or more of: transmitting the first scene rendering data to the first client device such that the first client device generates the first rendered scene; and transmitting the first scene rendering data to a cloud rendering device that generates the first rendered scene and provides the first rendered scene to the first client device (Kavallierou “[0055] Determining that an identified client device is no longer likely to render its video game instance at a quality that is below the threshold quality may be determined by one or more of the game server, monitoring server, or the identified client device. In the latter case, it may be that the identified client device is able to determine this based on game state information obtained from the game server.
[0056] In response to such a determination, the corresponding video game instances may no longer be rendered at the cloud game client. This may involve, for example, transmitting an instruction to the identified client devices, instructing them to revert to rendering their own respective video game instances”).
Regarding claims 7 and 16, Kavallierou modified by Paez teaches further comprising, prior to communication between the first client device and the second client device being established, one or more of: transmitting the first scene rendering data to the first client device such that the first client device generates the first rendered scene; and transmitting the first scene rendering data to a cloud rendering device that generates the first rendered scene and provides the first rendered scene to the first client device (Paez “[0099]…… As shown in FIG. 4, at operation 402, a server device transmits VR data to a first computing device. The VR data may include model data, 3D mesh data, asset data, state data, VR format files, or one or other data assemblies indicating information for enabling the first computing device to render and/or interact with the cooperative VR environment or component elements thereof (e.g., objects, structures, textures, materials, groups, etc.)”).
Claim(s) 4 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Kavallierou modified by Paez as applied to claims 1 and 10 above, and further in view of ZHANG et al. (US Pat. Pub. No. 20140213361 “Zhang”).
Regarding claims 4 and 13, Kavallierou modified by Paez is silent about further comprising, after communication between the first client device and the second client device is established: receiving, at the server, via the second communication link, from the second client device, respective virtual location data indicating one or more of a position and an orientation of a second virtual user in the multi-user virtual platform, the second virtual user associated with the second client device;
Zhang teaches, after communication between the first client device and the second client device is established: receiving, at the server, via the second communication link, from the second client device, respective virtual location data indicating one or more of a position and an orientation of a second virtual user in the multi-user virtual platform, the second virtual user associated with the second client device, with the scene image updated according to the current position of the second smart terminal (“[0021] second smart terminal, for shooting and displaying second smart terminal user reality scene image on the screen, superimposing a second virtual character on the second smart terminal user reality scene image, and sending information of the second virtual position of the second virtual character in the second smart terminal user reality scene image to the interactive server; acquiring position of the second smart terminal during movement, determining whether change of the position of the second smart terminal exceeds a preset threshold value, and if the result is "YES", moving the second virtual character in the second smart terminal user reality”);
Zhang and Kavallierou modified by Paez are analogous art, as both are related to game design.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kavallierou, as modified by Paez, by including, after communication between the first client device and the second client device is established: receiving, at the server, via the second communication link, from the second client device, respective virtual location data indicating one or more of a position and an orientation of a second virtual user in the multi-user virtual platform, the second virtual user associated with the second client device, with the scene image updated according to the current position of the second smart terminal, as taught by Zhang.
The motivation for the above is to provide interactive gaming, thereby providing enjoyment to the user.
Kavallierou modified by Paez and Zhang further teaches updating, via the server, the second scene rendering data based on the respective virtual location data; and transmitting, via the server, via the second communication link, to the second client device, the second scene rendering data, as updated, to cause the second client device to generate an updated second rendered scene for providing at the second client device (Paez “[0101] At operation 406, a server device transmits VR data to a second computing device. The VR data may include model data, 3D mesh data, asset data, state data, VR format files, or one or other data assemblies indicating information for enabling the second computing device to render and/or interact with the cooperative VR environment”).
Claim(s) 8 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Kavallierou modified by Paez as applied to claims 1 and 10 above, and further in view of SONG et al. (US Pat. Pub. No. 20210227151 “Song”).
Regarding claims 8 and 17, Kavallierou modified by Paez is silent about wherein the first rendered scene and the second rendered scene have about a same resolution, or wherein the first rendered scene and the second rendered scene are at about a respective maximum resolution of respective visual rendering devices of the first client device and the second client device.
Song teaches wherein the first rendered scene and the second rendered scene have about a same resolution, or wherein the first rendered scene and the second rendered scene are at about a respective maximum resolution of respective visual rendering devices of the first client device and the second client device (“[0017] According to the first aspect or the second aspect, in a possible design, the resolution of the second image is the same as the resolution of the first image. [0058]…… Once the first terminal and the second terminal establish communication through the server, the first terminal and the second terminal may transmit data, such as an image file and an instruction, to each other through the server, to implement a real-time image or video preview, display picture adjustment, photographing control, photo backhaul, group photo generation, and the like”);
Song and Kavallierou modified by Paez are analogous art, as both are related to image processing.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kavallierou, as modified by Paez, by including the first rendered scene and the second rendered scene having about a same resolution, or the first rendered scene and the second rendered scene being at about a respective maximum resolution of respective visual rendering devices of the first client device and the second client device, as taught by Song.
The motivation for the above is to ensure that the image rendered at the second device can be properly displayed at the first device.
Claim(s) 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Kavallierou modified by Paez as applied to claims 1 and 10 above, and further in view of Tuman (US Pat. Pub. No. 20120066315 “Tuman”).
Regarding claims 9 and 18, Kavallierou modified by Paez is silent about further comprising partnering the first client device and the second client device with each other on a basis of one or more of: a difference in respective processing capabilities between the first client device and the second client device being above a threshold difference; similarities between the first rendered scene and the second rendered scene; a proximity of the first client device and the second client device; and stability of a third communication link established between the first client device and the second client device over which the first client device and the second client device communicate.
Tuman teaches partnering the first client device and the second client device with each other on a basis of one or more of: a difference in respective processing capabilities between the first client device and the second client device being above a threshold difference; similarities between the first rendered scene and the second rendered scene; a proximity of the first client device and the second client device; and stability of a third communication link established between the first client device and the second client device over which the first client device and the second client device communicate (“[0283] receiving from a first user proof that the first user has obtained an article of apparel incorporating a first of the plurality of combinations comprising a first alphanumeric identifier, a first image, and a first color; [0284] receiving identifying information from the first user; [0285] generating a resource for the first user, in which the resource is generated based on the identifying information; [0286] storing, in association with one another, indications of the first alphanumeric identifier, the first image, the first color, and the identifying information; [0287] generating a first visual identifier based on the first alphanumeric identifier, the first image, and the first color; [0288] receiving from a second user a second alphanumeric identifier; [0289] receiving from the second user an indication of a second image; [0290] receiving from the second user an indication of a second color; [0291] determining that the second alphanumeric identifier matches the first alphanumeric identifier, the second image matches the first image, and the second color matches the first color”);
Tuman and Kavallierou modified by Paez are analogous art, as both are related to image processing.
Therefore, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to have further modified Kavallierou, as modified by Paez, by partnering the first client device and the second client device with each other on a basis of one or more of: a difference in respective processing capabilities between the first client device and the second client device being above a threshold difference; similarities between the first rendered scene and the second rendered scene; a proximity of the first client device and the second client device; and stability of a third communication link established between the first client device and the second client device over which the first client device and the second client device communicate, as taught by Tuman.
The motivation for the above is to determine whether the image of the first device should be sent to the second device, thereby ensuring that the benefit of rendering at the second device is worthwhile.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAPTARSHI MAZUMDER whose telephone number is (571)270-3454. The examiner can normally be reached 8 am-4 pm PST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said Broome can be reached at (571)272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAPTARSHI MAZUMDER/Primary Examiner, Art Unit 2612