Prosecution Insights
Last updated: April 19, 2026
Application No. 18/728,607

A DATA PROCESSING METHOD, A SYSTEM, A MEDIUM, AND A COMPUTER PROGRAM PRODUCT

Non-Final OA: §101, §103, §112
Filed: Jul 12, 2024
Examiner: HUYNH, THANG GIA
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Shanghai Lilith Technology Corporation
OA Round: 1 (Non-Final)
Grant Probability: 76% (Favorable)
Expected OA Rounds: 1-2
Estimated Time to Grant: 2y 4m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 76% (above average; 19 granted / 25 resolved; +14.0% vs TC avg)
Interview Lift: +50.0% (resolved cases with interview)
Avg Prosecution: 2y 4m (typical timeline); 21 applications currently pending
Total Applications: 46 (career history, across all art units)

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 73.9% (+33.9% vs TC avg)
§102: 7.7% (-32.3% vs TC avg)
§112: 11.5% (-28.5% vs TC avg)

Tech Center average is an estimate; based on career data from 25 resolved cases.

Office Action

§101 §103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claim 1 is objected to because of the following informalities: Claim 1 line 1, “a application program” should read “an application program”. Appropriate correction is required.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 5 and 6 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.

Claim 5 recites the limitation "the application server" in lines 5 and 9. There is insufficient antecedent basis for this limitation in the claim. Note that base Claim 1 recites “a application program” and thus makes no mention of an application server. Claim 5 recites the limitation “the jump server” in line 8. There is insufficient antecedent basis for this limitation in the claim. Similar to Claim 5, Claim 6 recites both "the application server" in lines 4, 5, and 6 and “the jump server” in line 6. Claim 6 recites the limitation “all application servers” in line 2. Claim 6 recites the limitation “the associated application server” in line 3. There is insufficient antecedent basis for these limitations in the claim. 
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 12 and 13 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter. Claim 12 recites a computer-readable storage medium. The broadest reasonable interpretation of a claim drawn to a computer readable storage medium (also called machine readable medium and other such variations) typically covers forms of non-transitory tangible media and transitory propagating signals per se in view of the ordinary and customary meaning of computer readable storage media, particularly when the specification is silent. See MPEP 2111.01. When the broadest reasonable interpretation of a claim covers a signal per se, the claim must be rejected under 35 U.S.C. 101 as covering non-statutory subject matter. The USPTO recognizes that applicants may have claims directed to computer readable storage media that cover signals per se, which the USPTO must reject under 35 U.S.C. 101 as covering both non-statutory subject matter and statutory subject matter. A claim drawn to such a computer readable storage medium that covers both transitory and non-transitory embodiments may be amended to narrow the claim to cover only statutory embodiments to avoid a rejection under 35 U.S.C. § 101 by adding the limitation "non-transitory" to the claim. Such an amendment would typically not raise the issue of new matter, even when the specification is silent because the broadest reasonable interpretation relies on the ordinary and customary meaning that includes signals per se. 
Applicant’s specification in Page 24 Lines 22-33 recites, “one or more transient or non-transient machine readable storage (e.g., computer-readable) media” which include, but are not limited to, “any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), such as a floppy disk, an optical disk, an optical disk read-only memory (CD-ROMs), a magneto-optical disk, a read-only memory (ROM), a random access memory (RAM), an erasable programmable read-only memory (EPROM), an electronically erasable programmable read-only memory (EEPROM), a magnetic or optical card, or a flash or tangible machine-readable memory for transmitting network information through electrical, optical, acoustic, or other forms of signals (e.g., a carrier wave, an infrared signal, a digital signal, etc.). Thus, a machine-readable medium includes any form of machine-readable medium suitable for storing or transmitting electronic instructions or machine (e.g., computer) readable information.” Since Applicant’s disclosure does not limit the definition of a “machine readable storage (e.g., computer-readable) media”, it could be a signal. As an additional note, a non-transitory computer readable storage medium having executable programming instructions stored thereon is considered statutory as non-transitory computer readable media excludes transitory data signals.

Claim 13 recites “A computer program product . . .” with the body of the claim reciting computer program steps, which are nothing more than programmed instructions to be performed by the system. Therefore the steps/elements recited in claim 13 are non-statutory. Similarly, computer programs claimed as computer listings per se, i.e., the descriptions or expressions of the programs, are not physical “things.” They are neither computer components nor statutory processes, as they are not “acts” being performed. 
Such claimed computer programs do not define any structural and functional interrelationships between the computer program and other claimed elements of a computer which permit the computer program’s functionality to be realized. In contrast, a claimed non-transitory computer-readable medium encoded with a computer program is a computer element which defines structural and functional interrelationships between the computer program and the rest of the computer which permit the computer program’s functionality to be realized, and is thus statutory. Accordingly, it is important to distinguish claims that define descriptive material per se from claims that define statutory inventions.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 
102(a)(2) prior art against the later invention.

Claims 1-2, 5, 8, and 12-13 are rejected under 35 U.S.C. 103 as being unpatentable over Sternberg et al. (US 20180165914 A1)(Hereinafter referred to as Sternberg) in view of Xu (CN 105892650 A).

Regarding Claim 1, Sternberg discloses A data processing method, applied to a application program that provide human-computer interaction based on virtual scenes, the data processing method includes: (See Abstract, “A method and system for facilitating the transfer of game or virtual reality (VR) state information is disclosed herein.” See [0063], “Server 160 may include a game/VR server, a gaming server hosting a massively multiplayer online (MMO) game, an Augmented Reality (AR) server and/or other types of similar servers.” In this case, the game server hosting the game corresponds to “application program”.) a determination step: determining whether a transfer request for transferring a virtual character from a source game to a destination client has been received; (See Fig. 4 showing Game/VR State Transfer Engine 406, and Destination Game/VR Client 408. See [0106], “Although not shown in FIG. 4, the Game/VR Server 402 and/or the Game/VR State Transfer Engine 406 may communicate with a Source Game/VR Client (e.g., the original owner of the transferred state) . . .” See [0105], “In Step 4, the Game/VR State Transfer Engine 406 may then send a request for game/VR State to the Game/VR Server 402. In Step 5, the Game/VR Server 402 may respond with the current game/VR State. In Step 6, the Game/VR State Transfer Engine 406 may then deliver the game/VR state to the Spectator Game/VR Client 408, which may be the destination client.” Lastly see [0081], “As used herein, game state may be defined as the information about the current status of a specific game, or avatar within a game, that is associated with a user's game account. . . 
Game state information to enable this feature includes but is not limited to current character profile and configuration settings, inventory, current location and orientation, quest status, achievements and affiliations, and state transfer history and availability.” In this case, the Game/VR State Transfer Engine can determine a transfer request for game state and game state would encompass a virtual character.) a generation step: when the transfer request is received, generating transmission data associated with the virtual character; (See [0081] teaching game state information including character profile, configuration settings, current location and orientation (transmission data associated with the virtual character). Also see [0105] teaching delivering the game state. Note that since the game state was delivered, that implies a generation of transmission data.) a transfer step: transferring the virtual character from the source game to the destination client on the basis of the transmission data and global data associated with the virtual character. (See [0105] teaching delivering the game state from the source game to the destination client corresponding to “transferring the virtual character”. See [0081] teaching game state information including character profile, configuration settings, current location and orientation (transmission data) as well as “avatar” and “inventory” data (global data). 
Lastly see [0094], “Game state information elements may also include, but are not limited to, one or more character properties such as a character type, an experience level, tasks achieved, an inventory of objects owned, points accumulated, character health (e.g., “health points”), amount of money or gold owned by the character, an energy level, skills attained, spells or special moves learned, and the like.”) However, Sternberg fails to explicitly disclose a determination step: determining whether a transfer request for transferring at least one virtual character from a first virtual scene to a second virtual scene has been received; a transfer step: transferring the virtual character from the first virtual scene to the second virtual scene on the basis of the transmission data and global data associated with the virtual character. Xu teaches transferring at least one virtual character from a first virtual scene to a second virtual scene has been received; (See Abstract, “displaying a first virtual three-dimensional space scene, detecting the first operation body; determining at least part of a virtual object corresponding to the first operation of the first virtual three-dimensional space in a scene, respectively obtaining the said at least part of a virtual object corresponding to the data, to another electronic device for sending data of the at least part of the virtual object, so that the other electronic device can be in a second virtual three-dimensional space scene. 
presenting the at least portion of the virtual object.” Also see Page 2 Paragraph 16, “transmitting data to the another electronic device of the all virtual objects so that in the second virtual three-dimensional space scene presenting the first virtual three-dimensional scene.” Lastly see Page 5 Paragraph 10, “Specifically, in the first virtual three-dimensional spatial scene comprises a plurality of virtual objects, such as first virtual three-dimensional space in a scene comprises the character, castle, tree, road and so on, the electronic device according to the first operation, determining the user selects which virtual object on the first scene in a virtual three-dimensional space.” Here, Xu teaches transferring all virtual objects (at least one character) from a first virtual scene to a second virtual scene.) transferring the virtual character from the first virtual scene to the second virtual scene on the basis of the transmission data and global data associated with the virtual character. (See Abstract and Page 2 Paragraph 16 teaching a first virtual three-dimensional space scene (the first virtual scene), a second virtual three-dimensional space scene (the second virtual scene) and transferring virtual objects which could comprise characters. In combination with Sternberg teaching game state information (transmission and global data associated with the virtual character), above limitation is taught.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sternberg with Xu to include having a first and second virtual scene, as well as the transferal of at least one virtual character. The motivation to combine Sternberg with Xu would have been obvious as both arts are within the same field of transferring data relating to games. 
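The three claimed steps discussed above (determination, generation, transfer) can be sketched in code. This is a minimal illustration under stated assumptions, not the application's actual implementation: every name in it (VirtualCharacter, TransferRequest, process_transfer, the particular dict fields) is hypothetical, and the split between scene-local data and long-term global data merely mirrors the claim language.

```python
# Hypothetical sketch of the claimed determination/generation/transfer steps.
# All class and function names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class VirtualCharacter:
    name: str
    scene: str
    global_data: dict = field(default_factory=dict)   # e.g. model, level, equipment
    scene_data: dict = field(default_factory=dict)    # e.g. position, motion posture

@dataclass
class TransferRequest:
    character: VirtualCharacter
    source_scene: str
    destination_scene: str

def extract_transmission_data(character: VirtualCharacter) -> dict:
    """Generation step: derive the data needed to re-create the character."""
    return {"position": character.scene_data.get("position"),
            "posture": character.scene_data.get("posture")}

def process_transfer(request: Optional[TransferRequest]) -> Optional[VirtualCharacter]:
    # Determination step: has a transfer request been received?
    if request is None:
        return None
    # Generation step: build transmission data for the character.
    transmission = extract_transmission_data(request.character)
    # Transfer step: place the character in the destination scene using
    # the transmission data plus its global (long-term) data.
    ch = request.character
    ch.scene = request.destination_scene
    ch.scene_data = dict(transmission)
    return ch
```

A usage sketch: a character at position (3, 4) in `scene_a` keeps its position and global level data after being moved to `scene_b`, while a `None` request (no transfer requested) leaves the method idle.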
Xu simply shows that having a first and second virtual scene, as well as transferring more than one virtual character, is a common and well-known practice within the art (See Xu Abstract and Page 2 Paragraph 16).

Regarding Claim 2, Sternberg in view of Xu disclose The data processing method of claim 1, wherein the application program includes at least one subprogram, the subprogram includes at least one virtual scene, and the first virtual scene and the second virtual scene belong to the same or different subprograms. (See Sternberg [0063], “Server 160 may include a game/VR server, a gaming server hosting a massively multiplayer online (MMO) game, an Augmented Reality (AR) server and/or other types of similar servers.” The game can be considered as a “subprogram”. Also see Xu Abstract and Page 2 Paragraph 16 teaching a first virtual three-dimensional space scene (the first virtual scene), a second virtual three-dimensional space scene (the second virtual scene). Note that games would obviously be implied to have virtual scenes. The motivation to combine would have been similar to that of Claim 1.)

Regarding Claim 5, Sternberg in view of Xu disclose The data processing method of claim 2, wherein the global data is stored in a long-term storage server, and the global data includes at least one of model information, level information, and equipment information of the virtual character; See Sternberg [0081], “As used herein, game state may be defined as the information about the current status of a specific game, or avatar within a game, that is associated with a user's game account. . . “ See Sternberg [0094], “Game state information elements may also include, but are not limited to, one or more character properties such as a character type, an experience level, tasks achieved, an inventory of objects owned, . . 
.” See Sternberg [0143], “This may be accomplished by periodically (e.g., every few seconds) compressing the game/VR state via well-known compression techniques and storing the compressed game/VR state along with the game multimedia (either multiplexed with the multimedia or separately stored by a game/VR state storage server.)” In this case, avatar within a game corresponds to “model information”, experience level corresponds to “level information”, and inventory of objects owned corresponds to “equipment information”.) the virtual character generates subprogram data when realizing human-computer interaction in the virtual scene, the subprogram data is stored in the application server, the subprogram data includes at least one of the position parameters and motion posture parameters of the virtual character; (See Sternberg [0081], “Game state information to enable this feature includes but is not limited to current character profile and configuration settings, inventory, current location and orientation, quest status, achievements and affiliations, and state transfer history and availability.” Note that if a game (subprogram) is being played with a virtual character, then data (subprogram data) would obviously be generated when the character interacts with the virtual scene. Also current location and orientation would correspond to “motion posture parameters of the virtual character”.) the transmission data is generated by the jump server after extracting subprogram data required for transfer from the application server corresponding to the first virtual scene and processing it, to support the transfer of the virtual character from the first virtual scene to the second virtual scene. (See Sternberg [0081] and [0094] teaching game state information (subprogram data required for transfer). See Xu Abstract, Page 2 Paragraph 16, and Page 5 Paragraph 10 teaching transferring virtual objects from a first virtual scene to a second virtual scene. 
The motivation to combine would have been similar to that of Claim 1.)

Regarding Claim 8, Sternberg in view of Xu disclose A data processing system that provides human-computer interaction application program based on virtual scenes, including: (See Sternberg Abstract, “A method and system for facilitating the transfer of game or virtual reality (VR) state information is disclosed herein.” See Sternberg [0063], “Server 160 may include a game/VR server, a gaming server hosting a massively multiplayer online (MMO) game, an Augmented Reality (AR) server and/or other types of similar servers.” In this case, the game server hosting the game corresponds to “application program”.) a first application server that provides a first virtual scene; a second application server that provides a second virtual scene; (See Sternberg [0063], “Server 160 may include a game/VR server, a gaming server hosting a massively multiplayer online (MMO) game, an Augmented Reality (AR) server and/or other types of similar servers. . . . Server 170 may include a second game/VR server.” Also see Xu Abstract and Page 2 Paragraph 16 teaching a first virtual three-dimensional space scene (the first virtual scene), a second virtual three-dimensional space scene (the second virtual scene). Note that different games can easily have different virtual scenes, and thus the combination teaches a first application server that provides a first virtual scene and a second application server that provides a second virtual scene.) a jump server connected to the first application server and the second application server respectively, and storing transfer control logic; (See Sternberg Fig. 4 showing game/VR server 402 which can be considered connected to Destination Game/VR Client 408 (as it transfers the game state to it). Also see Sternberg [0106], “Although not shown in FIG. 4, the Game/VR Server 402 and/or the Game/VR State Transfer Engine 406 may communicate with a Source Game/VR Client”. 
Lastly, note that game/VR server 402 can be considered as the jump server.) a long-term storage server connected to the first application server and the second application server respectively, and storing global data; (See Sternberg [0143], “This may be accomplished by periodically (e.g., every few seconds) compressing the game/VR state via well-known compression techniques and storing the compressed game/VR state along with the game multimedia (either multiplexed with the multimedia or separately stored by a game/VR state storage server.)” Lastly see Sternberg [0081] and [0094] teaching game state information including “avatar”, “inventory”, and character properties such as “experience level” (global data).) the first application server determines whether a transfer request for transferring at least one virtual character from a first virtual scene to a second virtual scene has been received; (See Sternberg [0063], “Server 160 may include a game/VR server,” See Sternberg [0108], “a Source Game/VR Client 502 may send a Game/VR State Transfer Trigger message to a Game/VR Server 504. In Step 2, the Game/VR Server 504 may send a Push Game/VR State Transfer Request message to a Game/VR State Transfer Engine 506, which, in Step 3, may respond with a Push Game/VR State Transfer Request Approved message. In Step 4, the Game/VR Server 504 may then send a message with the game/VR state to the Game/VR State Transfer Engine 506. In Step 5, the Game/VR State Transfer Engine 506 may then send the state to the Spectator/Destination Game/VR Client 508 in a Delivered Game/VR State message.” Note that sending a Transfer Trigger message implies that the system can determine that there is a transfer request.) 
when the first application server receives the transfer request, the jump server generates transmission data associated with the virtual character, and transfers the virtual character from the first virtual scene to the second virtual scene on the basis of the transmission data and global data associated with the virtual character. (See Xu Abstract and Page 2 Paragraph 16 teaching a first virtual three-dimensional space scene (the first virtual scene), a second virtual three-dimensional space scene (the second virtual scene) and transferring virtual objects. See Sternberg [0081] and [0094] teaching game state information including global data. Also see Sternberg [0105], “In Step 4, the Game/VR State Transfer Engine 406 may then send a request for game/VR State to the Game/VR Server 402. In Step 5, the Game/VR Server 402 may respond with the current game/VR State. In Step 6, the Game/VR State Transfer Engine 406 may then deliver the game/VR state to the Spectator Game/VR Client 408, which may be the destination client.” Lastly, see Sternberg Fig. 4 showing the transfer step, transferring game state information (the virtual character) from a Source Game/VR Client to Destination Game/VR Client. The motivation to combine would have been similar to that of the Claim 1 rejection.)

Regarding Claim 12, Sternberg in view of Xu disclose A computer-readable storage medium with a computer program stored thereon, wherein when the computer program is executed by a processor, the steps of the method of claim 1 are implemented. (See Sternberg [0204], “Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. . . 
Examples of computer-readable storage media include.” See Sternberg [0204], “In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor.” Also see Claim 1 rejection.)

Regarding Claim 13, Sternberg in view of Xu disclose A computer program product, including a computer program, wherein when the computer program is executed by a processor, the steps of the method of claim 1 are implemented. (See Sternberg [0204], “In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer-readable medium for execution by a computer or processor.” Also see Claim 1 rejection.)

Claims 3-4, 7, and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Sternberg in view of Xu and in further view of Silbey (US 8869044 B2).

Regarding Claim 3, Sternberg in view of Xu disclose The data processing method of claim 2, the subprogram runs the human-computer interaction function, (See Sternberg [0063] teaching a MMO game. Note that a game would obviously run human-computer interaction functions in order to be played by users.) one or more virtual characters in the same game realize human-computer interaction according to the functional logic of the subprogram. (See Sternberg [0063] teaching a MMO game. For a multiplayer game, that implies one or more virtual characters in the same game realize human-computer interaction according to the functional logic of the subprogram.) 
However, Sternberg in view of Xu fails to explicitly disclose The data processing method of claim 2, the subprogram runs the human-computer interaction function carrying the virtual scene through logical groups, and each logical group is bound to a unique logical group serial number, one or more virtual characters in the same logical group realize human-computer interaction according to the functional logic of the subprogram. Silbey teaches carrying the virtual scene through logical groups, and each logical group is bound to a unique logical group serial number (See Claim 1, “an online presence of a first user in a first virtual environment; identifying an online presence of at least a second user in a second virtual environment; determining whether a precondition for jumping the first user is satisfied, the precondition specifying at least one of a capacity limit or one or more requirements for entry into a virtual room, virtual world instance, or virtual world in which the online presence of the second user is located; and if the precondition is satisfied, reserving a position for the first user in the second virtual environment and presenting an indication to the first user that a jump is available, wherein the jump is invoked to move the online presence of the first user from a current location within the first virtual environment to a target location in the second virtual environment. In this case, a virtual room would correspond to a “logical group” and note that it is well known and common practice within the art that virtual rooms would have room ids and thus “each logical group is bound to a unique logical group serial number”. Silbey also teaches moving (carrying) a user from a first to second environment. 
In combination with Sternberg teaching the transfer of game/VR state information (noting that game/VR state information would be inclusive of the virtual scene, see Sternberg [0082], “As used herein, VR state may be defined as information about the current status of a VR environment such that the environment can be repeated or recreated (e.g., exactly recreated) for the source user following a disruption, and for transfer or cloning of the VR environment for destination users.”), the limitation of carrying the virtual scene through logical entities is taught.) one or more virtual characters in the same logical group realize human-computer interaction according to the functional logic of the subprogram. (See Col 3 Lines 6-12, “For example, to perform a "jump" across worlds the system may automatically reserve a position for the user in a destination world (also referred to herein as the "target" world), log the user out of the current world, log the user into the destination world, and create an online presence in the destination world near the friend (e.g., in the same "room" in the target world).” Once again, for a multiplayer game, that implies one or more virtual characters in the same room (logical group) would realize human-computer interaction according to the functional logic of the subprogram.) It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Sternberg in view of Xu with Silbey to include virtual rooms within games (logical groups) which have a unique logical group serial number. The motivation to combine Sternberg in view of Xu with Silbey would have been obvious as Silbey is within a similar field of transferring virtual characters from a first to a second virtual scene (see Silbey Abstract). 
The benefit of having game rooms (logical groups) in multiplayer interactive games is obvious, as it helps create containerized instances which are easier to manage and avoid overloading a server with too many virtual characters within the same scene.

Regarding Claim 4, Sternberg in view of Xu and Silbey disclose The data processing method of claim 3, the number of logical groups is set according to the number of virtual characters required to be carried by each virtual scene; (Silbey Claim 1, “determining whether a precondition for jumping the first user is satisfied, the precondition specifying at least one of a capacity limit or one or more requirements for entry into a virtual room, virtual world instance, or virtual world in which the online presence of the second user is located” See Silbey Col 3 Lines 33-35, “Each world may provide an instance of the same general virtual environment, with a collection of virtual "rooms" 126.” In this case, Silbey teaches multiple rooms and each room can have different or the same requirements. Furthermore, each room can correspond to a specific number of virtual characters. Thus, in one scenario, the number of rooms (logical groups) is set according to the number of virtual characters required to be carried by each virtual scene.) the transfer step further includes: matching logical groups for the virtual characters transferred to the second virtual scene according to matching rules. (See Sternberg [0063] teaching a MMO game. Also see Silbey Col 1 Lines 22-24, “Alternatively, such environments may be game or session based, e.g., where a group of users participate in a match of a first-person shooter game.” In this case, for a multiplayer game, it is common to have match making rules and thus, matching rooms (logical groups) according to matching rules.) 
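The logical-group arrangement discussed for Claims 3-4 (each group bound to a unique serial number, characters matched to groups by a capacity rule) can be sketched as follows. This is an illustrative assumption, not text from the application or the prior art: the class LogicalGroup, the function match_group, and the first-fit capacity rule are all hypothetical stand-ins for whatever matching rules the claims actually cover.

```python
# Hypothetical sketch: logical groups with unique serial numbers and a
# capacity-based matching rule. Names and the first-fit rule are assumptions.
from itertools import count

class LogicalGroup:
    _serials = count(1)  # each group is bound to a unique serial number

    def __init__(self, capacity: int):
        self.serial = next(LogicalGroup._serials)
        self.capacity = capacity
        self.members = []

    def has_room(self) -> bool:
        return len(self.members) < self.capacity

def match_group(groups, character: str) -> LogicalGroup:
    """Matching rule: first group with spare capacity; otherwise open a new one."""
    for group in groups:
        if group.has_room():
            group.members.append(character)
            return group
    new_group = LogicalGroup(capacity=groups[0].capacity if groups else 2)
    new_group.members.append(character)
    groups.append(new_group)
    return new_group
```

Under this rule, a two-seat group fills with the first two characters, and a third character triggers creation of a second group with its own serial number, which matches the idea that the number of groups follows the number of characters each scene must carry.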
Regarding Claim 7, Sternberg in view of Xu and Silbey disclose The data processing method of claim 1, wherein the determination step further includes the transfer request is triggered by an interaction operation received by a preset control provided in the first virtual scene, (See Sternberg [0105], “In Step 4, the Game/VR State Transfer Engine 406 may then send a request for game/VR State to the Game/VR Server 402. In Step 5, the Game/VR Server 402 may respond with the current game/VR State.” See [0086], “The following description may include real-time game/VR state transfer. The real-time game/VR state transfer may be initiated by either a push or a pull trigger. For example, in a pull transaction, an expert player may allow his/her game state to be transferred in return for compensation. . . .” Lastly see Silbey, Col 3 Lines 1-6, “Once identified, the virtual world interface may provide an indication that the user can "jump" his or her online presence to the target location, e.g., by presenting a "jump" button placed in the user's friend list. A "jump" relocates the user to the target location, eliminating the need for the user to navigate the virtual environment(s) in search of the friend.”) wherein the preset control is associated with the second virtual scene. (See Silbey Col 3 Lines 1-6, “Once identified, the virtual world interface may provide an indication that the user can "jump" his or her online presence to the target location, e.g., by presenting a "jump" button placed in the user's friend list. 
A "jump" relocates the user to the target location, eliminating the need for the user to navigate the virtual environment(s) in search of the friend.” Note that the jump button (preset control) can be considered to be associated with the second virtual scene, as Silbey implies that either friend is able to jump to the other; thus, for a first and second scene, a jump button would be present in both scenes, and thus “the preset control is associated with the second virtual scene”. The motivation to combine would have been similar to that of Claim 3.)

Regarding Claim 11, Sternberg in view of Xu and Silbey disclose The data processing system of claim 8, further including: a user device communicatively connected with the first application server, and the user interacts with at least one virtual character in the first virtual scene displayed by the user device; (See Sternberg [0074], “In some embodiments, user devices 180b and 180c may communicate with server 185 and/or service server 190 via user device 180a.” See Silbey Col 7 Lines 50-54, “In one embodiment, the client systems 710 may include existing computer systems, e.g., desktop computers, server computers, laptop computers, tablet computers, gaming consoles, hand-held gaming devices and the like.” Note that games and gaming devices would imply “the user interacts with at least one virtual character in the first virtual scene displayed by the user device”.) when the preset control in the first virtual scene displayed by the user device receives an interaction operation from the user, the transfer request is sent to the first server, wherein the preset control is associated with the second virtual scene. (See Sternberg [0086], “The following description may include real-time game/VR state transfer. The real-time game/VR state transfer may be initiated by either a push or a pull trigger. For example, in a pull transaction, an expert player may allow his/her game state to be transferred in return for compensation. . .
.” See Silbey Col 3 Lines 1-6, “Once identified, the virtual world interface may provide an indication that the user can "jump" his or her online presence to the target location, e.g., by presenting a "jump" button placed in the user's friend list. A "jump" relocates the user to the target location, eliminating the need for the user to navigate the virtual environment(s) in search of the friend.” Note that the jump button (preset control) can be considered to be associated with the second virtual scene, as Silbey implies that either friend is able to jump to the other; thus, for a first and second scene, a jump button would be present in both scenes, and thus “the preset control is associated with the second virtual scene”. The motivation to combine would have been similar to that of Claim 3.)

Allowable Subject Matter

Claims 6 and 9-10 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter:

Regarding Claim 6, the cited prior art does not disclose or render obvious the combination of elements cited in the claims as a whole. Specifically, the cited prior art fails to disclose or render obvious the limitations: the subprogram data is only visible to the associated application server, is generated when the virtual character logs in or transfers into the application server, and is destroyed when the virtual character logs out or transfers out of the application server; the transmission data is only visible to the jump server and the application server associated with this transfer process during the corresponding transfer process; Thus, Claim 6 contains allowable subject matter.

Regarding Claim 9, the cited prior art does not disclose or render obvious the combination of elements cited in the claims as a whole.
Specifically, the cited prior art fails to disclose or render obvious the limitations: the subprogram data is generated when the virtual character logs in or transfers into the application server, and is destroyed when the virtual character logs out or transfers out of the application server; the transmission data is generated by the jump server after extracting subprogram data required for transfer from the application server corresponding to the first virtual scene and processing it, to support the transfer of the virtual character from the first virtual scene to the second virtual scene; the transmission data is only visible to the jump server and the application server associated with this transfer process during the corresponding transfer process; Thus, Claim 9 contains allowable subject matter.

Regarding Claim 10, the cited prior art does not disclose or render obvious the combination of elements cited in the claims as a whole. Specifically, the cited prior art fails to disclose or render obvious the limitations: each virtual scene is supported by an independent application server; . . . the data processing system further includes: a management server setting the number of logical groups according to the number of virtual characters required to be carried by each virtual scene; a matching server connected to the management server and the jump server, matching logical groups for the virtual characters transferred to the second virtual scene according to matching rules. Thus, Claim 10 contains allowable subject matter.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THANG G HUYNH whose telephone number is (571)272-5432. The examiner can normally be reached Mon-Thu 7:30am-4:30pm EST | Fri 7:30am-11:30am EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung, can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611
/T.G.H./
Examiner, Art Unit 2611

Prosecution Timeline

Jul 12, 2024
Application Filed
Jan 28, 2026
Non-Final Rejection — §101, §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597100
DEEP IMAGE DELIGHTING
2y 5m to grant Granted Apr 07, 2026
Patent 12586309
MACHINE-LEARNING METHOD ON VECTORIZED THREE-DIMENSIONAL MODEL AND LEARNING SYSTEM THEREOF
2y 5m to grant Granted Mar 24, 2026
Patent 12581083
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR COMPRESSING TWO-DIMENSIONAL IMAGE
2y 5m to grant Granted Mar 17, 2026
Patent 12560450
METHOD AND SERVER FOR GENERATING SPATIAL MAP
2y 5m to grant Granted Feb 24, 2026
Patent 12554815
DEVICES, METHODS, AND GRAPHICAL USER INTERFACES FOR AUTHORIZING A SECURE OPERATION
2y 5m to grant Granted Feb 17, 2026
Based on 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
76%
Grant Probability
99%
With Interview (+50.0%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 25 resolved cases by this examiner. Grant probability derived from career allow rate.
