DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to non-statutory subject matter.
Step 1:
The claims are drawn to the apparatus, process, and computer-readable medium (CRM) categories.
Thus, under Step 1 of the analysis, the claims are directed to eligible categories of subject matter.
Step 2a:
Prong 1: Does the claim recite an abstract idea, law of nature, or natural phenomenon?
Representative claim 1 is analyzed below, with italicized limitations indicating recitations of an abstract idea.
A virtual scene synchronization method, the method comprising: receiving, by a server, a midway joining request that is for a virtual scene and that is transmitted by a first terminal, the midway joining request carrying an object identifier of a target virtual object, the target virtual object being a virtual object controlled by the first terminal, and the virtual scene comprising a virtual object controlled by at least one second terminal; transmitting, by the server, data of a plurality of scene image frames of the virtual scene to the first terminal, the data of the plurality of scene image frames being used to run scene progress of the virtual scene from an initial progress state to a target progress state at which the virtual scene is currently located; and in response to the first terminal running the scene progress of the virtual scene to the target progress state, transmitting, by the server, an object loading instruction carrying the object identifier to the first terminal and the at least one second terminal to enable the first terminal and the at least one second terminal to load the target virtual object in the virtual scene.
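For orientation only, the midway-join flow recited in claim 1 can be modeled as a short sketch. This is an illustrative model, not part of the claims or the record; every class, method, and identifier below is hypothetical.

```python
# Illustrative sketch of the claimed midway-join flow: a server buffers scene
# image frames, replays them to a midway joiner, and, once the joiner reaches
# the current (target) progress state, instructs all terminals to load the
# joiner's target virtual object. All names are hypothetical.

class Terminal:
    def __init__(self):
        self.progress = 0       # number of scene image frames run so far
        self.objects = set()    # virtual objects loaded in the scene

    def receive_frames(self, frames):
        # Run scene progress frame by frame from the initial state
        # toward the target state.
        for _ in frames:
            self.progress += 1

    def load_object(self, object_id):
        self.objects.add(object_id)


class Server:
    def __init__(self):
        self.frames = []        # buffered frames since the initial progress state
        self.terminals = []     # the at least one second terminal
        self.loaded_objects = set()

    def advance(self, frame_data):
        # Normal play: record each scene image frame as progress advances.
        self.frames.append(frame_data)

    def handle_midway_join(self, first_terminal, object_id):
        # The midway joining request carries the target object's identifier.
        # Transmit the buffered frames so the joiner can catch up.
        first_terminal.receive_frames(list(self.frames))
        # Once the joiner has run progress to the target state, transmit an
        # object loading instruction to every terminal.
        if first_terminal.progress == len(self.frames):
            for t in self.terminals + [first_terminal]:
                t.load_object(object_id)
            self.loaded_objects.add(object_id)
```

Under this sketch, a terminal that joins after five frames of play replays those five frames before its object is loaded by every participant.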
The italicized limitations fall within at least three of the groupings of abstract ideas enumerated in the 2019 PEG:
Fundamental economic principles or practices
Commercial or legal interactions
Managing personal behavior or relationships or interactions between people
The claims are directed towards incentivizing the behavior of users playing a game via group agreements or contracts. The Examiner views this as a fundamental economic practice, an agreement in the form of a contract, and the managing of personal behavior or relationships between people, all of which are considered abstract ideas under the 2019 guidelines.
Prong 2: Does the claim recite additional elements that integrate the exception into a practical application of the exception?
Although the claims recite additional limitations, such as a server, a first terminal, and at least one second terminal, these additional limitations do not integrate the exception into a practical application of the exception. For example, the claims require only generic computing components, such as a processor and computer-readable media.
These additional limitations do not represent an improvement to the functioning of a computer or to any other technology or technical field (MPEP 2106.05(a)). Nor do they apply the exception using a particular machine (MPEP 2106.05(b)). Furthermore, they do not effect a transformation (MPEP 2106.05(c)). Rather, these additional limitations amount to an instruction to “apply” the judicial exception using a computer as a tool to perform the abstract idea.
Step 2b:
Under Step 2B, the claims do not include additional elements that are sufficient to amount to significantly more than the judicial exception because they amount to conventional and routine computer implementation and mere instructions for implementing the abstract idea on generic computing devices.
For example, the claims recite additional elements, such as a server, that, viewed as a whole, are indistinguishable from conventional computing elements known in the art. The additional elements therefore fail to yield significantly more than the underlying abstract idea. Viewing the limitations as an ordered combination adds nothing that is not already present when the elements are taken individually. There is no indication that the combination of elements improves the functioning of a computer or any other technology.
For these reasons, the claims are not patent-eligible under 35 U.S.C. § 101.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA ) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by the video game The Division (released March 2016).
Regarding claim 1, The Division, hereinafter Division, discloses a virtual scene synchronization method, (1:42:56 – 1:44:33 of NPL), receiving, by a server, a midway joining request that is for a virtual scene and that is transmitted by a first terminal, the midway joining request carrying an object identifier of a target virtual object, (1:42:56 – 1:44:33 of NPL), the target virtual object being a virtual object controlled by the first terminal, (1:42:56 – 1:44:33 of NPL), and the virtual scene comprising a virtual object controlled by at least one second terminal, (1:49:26 of NPL), transmitting, by the server, data of a plurality of scene image frames of the virtual scene to the first terminal, the data of the plurality of scene image frames being used to run scene progress of the virtual scene from an initial progress to a target progress at which the virtual scene is currently located, (1:42:56 – 2:06:37 of NPL), and in response to the first terminal running the scene progress of the virtual scene to the target progress state, transmitting, by the server, an object loading instruction carrying the object identifier to the first terminal and the at least one second terminal to enable the first terminal and the at least one second terminal to load the target virtual object in the virtual scene, (2:06:37 – 2:06:50 of NPL).
Regarding claims 2 and 19, Division discloses wherein the transmitting, by the server, data of the plurality of scene image frames of the virtual scene to the first terminal comprises: obtaining, by the server, the data of the plurality of scene image frames, wherein the data of the scene image frames comprises a first type of data usable by the first terminal to perform operation logic; and transmitting, by the server, the first type of data of the plurality of scene image frames to the first terminal, (1:42:56 – 1:44:33 of NPL).
Regarding claim 3, Division discloses wherein the data of the scene image frames further comprises a second type of data usable to perform rendering logic; and the transmitting, by the server, data of a plurality of scene image frames of the virtual scene to the first terminal comprises: obtaining, by the server, a plurality of target scene image frames in a target frame quantity range from the plurality of scene image frames; and transmitting, by the server, the second type of data of the plurality of target scene image frames to the first terminal, (1:42:56 – 2:06:37 of NPL).
Regarding claim 4, Division discloses wherein the data of the scene image frames further comprises a second type of data usable to perform rendering logic; and the method comprises: determining, by the server, a plurality of pieces of rendering content based on the data of the plurality of scene image frames; obtaining, by the server, from the data of the plurality of scene image frames, data of target rendering content, wherein a rendering priority of the target rendering content is higher than a target priority, and the data of the target rendering content is partial data of a second type of data of a scene image frame in which the target rendering content is located; and transmitting, by the server, the data of the target rendering content to the first terminal, (1:42:56 – 2:06:37 of NPL).
Regarding claim 5, Division discloses receiving, by the server and from the first terminal, a scene exit request that is for the virtual scene, wherein the scene exit request carries the object identifier of the target virtual object; transmitting, by the server to the first terminal, an exit verification instruction, wherein the exit verification instruction indicates to determine whether a state of the target virtual object is an exitable state; and in response to receiving an object settlement request originating from the first terminal, transmitting, by the server, an object settlement instruction to the first terminal and the at least one second terminal, wherein the object settlement instruction indicates the first terminal to report current attribute information of the target virtual object, and indicates the at least one second terminal to remove the target virtual object from the virtual scene, (1:42:56 – 1:44:33 of NPL).
Regarding claim 6, Division discloses marking, by the server, any first virtual object in the virtual scene with a first marker, wherein the first virtual object is a virtual object present when the scene progress of the virtual scene is the initial progress state, and the first marker indicates that the first virtual object is always present in the virtual scene, (1:42:56 – 2:06:37 of NPL).
Regarding claims 7 - 10, Division discloses in response to the midway joining request, modifying, by the server, the first marker of the target virtual object to a second marker in a case that the target virtual object is the first virtual object, wherein the second marker indicates that the target virtual object exited the virtual scene and joins the virtual scene this time; and in a case that the target virtual object is not the first virtual object, marking, by the server, the target virtual object with a third marker, wherein the third marker indicates that the target virtual object joins the virtual scene this time, (1:49:26 – 1:50:00 of NPL).
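The marker scheme recited in claims 6-10 amounts to a small state machine over object markers. The following sketch is purely illustrative and uses hypothetical names: objects present at the initial progress state carry a first marker; an original object that exited and rejoins midway is remarked with a second marker; a brand-new midway joiner receives a third marker.

```python
# Hypothetical model of the first/second/third marker scheme in claims 6-10.
# Marker names below are invented for illustration; the claims define them
# only by their roles.
FIRST = "always_present"    # present since the initial progress state
SECOND = "rejoined"         # original object that exited and rejoined midway
THIRD = "joined_midway"     # object joining the scene for the first time

def mark_on_midway_join(markers, object_id):
    """Update an object's marker in response to a midway joining request.

    markers: dict mapping object identifier -> current marker string.
    """
    if markers.get(object_id) == FIRST:
        # Target object was a first virtual object: remark it as rejoined.
        markers[object_id] = SECOND
    else:
        # Target object is new to the scene: mark it as a midway joiner.
        markers[object_id] = THIRD
    return markers
```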
Regarding claims 11 – 15, Division discloses wherein the data of the scene image frames comprises a first type of data, and the first type of data is used to perform operation logic; and the running, by the first terminal based on the data of the plurality of scene image frames, scene progress of the virtual scene from the initial progress state to the target progress state comprises: loading, by the first terminal, the virtual scene obtained when the scene progress is the initial progress state; obtaining, by the first terminal, the first type of data of the plurality of scene image frames from the data of the plurality of scene image frames; and running, by the first terminal based on the first type of data of the plurality of scene image frames, the scene progress of the virtual scene from the initial progress state to the target progress state, (1:42:56 – 2:06:37 of NPL).
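The terminal-side replay recited in claims 11-15 can likewise be sketched: each frame's data is notionally split into operation-logic data (the "first type") and rendering data (the "second type"), and only the first type is needed to run scene progress from the initial state to the target state. The field names and data shapes below are assumptions for illustration only.

```python
# Illustrative terminal-side replay per claims 11-15: advance scene progress
# using only the first type (operation-logic) data of each frame, ignoring
# the second type (rendering) data. Keys "first_type"/"second_type" are
# hypothetical.

def run_to_target(frames):
    """Run scene progress from the initial state by applying each frame's
    operation-logic data; returns the target progress state."""
    state = {"progress": 0, "events": []}   # initial progress state
    for frame in frames:
        op_data = frame["first_type"]       # rendering data is not consulted
        state["progress"] += 1
        state["events"].extend(op_data)
    return state                            # target progress state

frames = [
    {"first_type": ["spawn:A"], "second_type": b"\x00pixels"},
    {"first_type": ["move:A"],  "second_type": b"\x01pixels"},
]
state = run_to_target(frames)
```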
Regarding claim 16, Division discloses wherein the method further comprises: determining, by the first terminal, a state of the target virtual object in response to an exit operation on the virtual scene; and transmitting, by the first terminal to a server, current attribute information of the target virtual object in a case that the state of the target virtual object is an exitable state, (2:06:37 – 2:06:50 of NPL).
Regarding claim 17, Division discloses wherein the loading, by the first terminal, the target virtual object in the virtual scene comprises: preloading, by the first terminal, the target virtual object in the virtual scene before the scene progress of the virtual scene is loaded to the target progress state, (1:42:56 – 1:44:33 of NPL).
Regarding claim 18, Division discloses one or more non-transitory computer readable media storing computer readable instructions, which, when executed by a processor, configure a data processing system to perform receiving a midway joining request that is for a virtual scene and that is transmitted by a first terminal, the midway joining request carrying an object identifier of a target virtual object, (1:42:56 – 1:44:33 of NPL), the target virtual object being a virtual object controlled by the first terminal, (1:42:56 – 1:44:33 of NPL), and the virtual scene comprising a virtual object controlled by at least one second terminal, (1:49:26 of NPL), transmitting, by the server, data of a plurality of scene image frames of the virtual scene to the first terminal, the data of the plurality of scene image frames being used to run scene progress of the virtual scene from an initial progress to a target progress at which the virtual scene is currently located, (1:42:56 – 2:06:37 of NPL), and in response to the first terminal running the scene progress of the virtual scene to the target progress state, transmitting, by the server, an object loading instruction carrying the object identifier to the first terminal and the at least one second terminal to enable the first terminal and the at least one second terminal to load the target virtual object in the virtual scene, (2:06:37 – 2:06:50 of NPL).
Regarding claim 20, Division discloses one or more non-transitory computer readable media storing computer readable instructions, which, when executed by a processor, configure a data processing system to perform obtaining data of a plurality of scene image frames of a virtual scene in response to a midway joining operation on the virtual scene, the midway joining operation indicating to join a target virtual object to the virtual scene; running, based on the data of the plurality of scene image frames, scene progress of the virtual scene from an initial progress state to a target progress state, the target progress state being scene progress at which the virtual scene is currently located; and loading the target virtual object in the virtual scene, (1:42:56 – 2:06:37 of NPL).
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERIC M THOMAS whose telephone number is (571)272-1699. The examiner can normally be reached 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, David Lewis, can be reached at 571-272-7673. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/E.M.T/Examiner, Art Unit 3715 /DAVID L LEWIS/Supervisory Patent Examiner, Art Unit 3715