Prosecution Insights
Last updated: April 19, 2026
Application No. 18/688,701

INFORMATION INTERACTION METHOD, DEVICE, APPARATUS AND MEDIUM BASED ON AUGMENTED REALITY

Non-Final OA: §103, §112

Filed: Mar 01, 2024
Examiner: GRAY, RYAN M
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: BEIJING BYTEDANCE NETWORK TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)

Outlook: Favorable
Grant Probability: 88%
Expected OA Rounds: 1-2
Expected Time to Grant: 2y 2m
Grant Probability With Interview: 98%

Examiner Intelligence

Career Allow Rate: 88% (above average; 589 granted / 672 resolved; +25.6% vs Tech Center average)
Interview Lift: +10.9% (moderate lift, measured across resolved cases with an interview)
Typical Timeline: 2y 2m average prosecution
Career History: 690 total applications across all art units; 18 currently pending
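
The headline figures above can be reproduced from the raw counts. Below is a minimal Python sketch; the vendor's exact methodology is not published, so the assumption that the with-interview probability is simply the base rate plus the lift is ours, for illustration only.

    granted, resolved = 589, 672
    career_allow_rate = granted / resolved   # 0.876 -> displayed as 88%
    interview_lift = 0.109                   # +10.9% lift with an interview
    # Assumption: with-interview probability = base rate + lift, capped at 100%.
    with_interview = min(career_allow_rate + interview_lift, 1.0)
    print(f"{career_allow_rate:.1%} base, {with_interview:.1%} with interview")
    # -> 87.6% base, 98.5% with interview (the dashboard rounds to 88% / 98%)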

Statute-Specific Performance

§101: 7.4% (-32.6% vs TC avg)
§102: 8.3% (-31.7% vs TC avg)
§103: 68.4% (+28.4% vs TC avg)
§112: 3.5% (-36.5% vs TC avg)
Deltas are relative to the Tech Center average estimate. Based on career data from 672 resolved cases.

Office Action

Rejections: §103, §112
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Interpretation

The scope of the pending claims is interpreted as follows: claims 11, 12 and 16-18 are interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because each claim limitation uses a generic placeholder coupled with functional language, without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Claims 1-10, 13-15 and 19-20 are given their broadest reasonable interpretation.

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes claim limitations that do not use the word "means" but are nonetheless interpreted under 35 U.S.C. 112(f) for the reason above. The limitation at issue is "module" in claims 11, 12 and 16-18. Because these limitations are interpreted under 35 U.S.C. 112(f), they are construed to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend this interpretation, applicant may (1) amend the claim limitations to avoid 112(f) treatment, e.g. by reciting sufficient structure to perform the claimed function, or (2) present a sufficient showing that the claim limitations already recite sufficient structure.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Use of italics indicates a limitation that is not explicitly disclosed by the cited reference alone.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Fajt (US 2019/0379765) in view of Kawamae (US 2022/0114792).

Claim 1

Examiner's Interpretation: The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art.

Claim Mapping: Fajt discloses an information interaction method based on augmented reality and applied to a first client terminal [image omitted: media_image1.png], the method comprising:

generating first interactive data in response to an interactive operation of a first virtual target in a virtual reality space, and sending the first interactive data to a first server terminal (Fajt, ¶ 46: "the first endpoint system 302 can send a single notification to the communication relay server 310");

receiving second interactive data corresponding to a second virtual target sent by the first server terminal, wherein the second virtual target and the first virtual target share the virtual reality space (Fajt, ¶ 51: "In some embodiments, the communication relay engine 402 is configured to receive notifications from endpoint systems, and to re-transmit those notifications to other endpoint systems. In some embodiments, the state monitoring engine 406 is configured to manage state information held within the state data store 404. In some embodiments, the state monitoring engine 406 may review the notifications received by the communication relay engine 402, and may store information from the notifications in the state data store 404. In some embodiments, the state monitoring engine 406 may ignore information that is ephemeral (including but not limited to location information from location change notifications associated with moving objects), because it will change too quickly to be usefully stored. In some embodiments, the state monitoring engine 406 may wait to store location information in the state data store 404 until the location change notifications indicate that a previously moving object has come to rest. In some embodiments, the state monitoring engine 406 may store information from notifications that is not ephemeral (or at least that changes on a less-frequent basis), such as whether an avatar is present in the shared virtual environment, a score for a game being played, and/or the like. Though each endpoint system should be receiving the notifications from the communication relay engine 402, storing data in the state data store 404 allows an endpoint system that joins the shared virtual environment later to receive initial status upon joining, instead of having to wait to receive notifications from the various endpoint systems to know what objects to present.");

calling a physical engine to render the interactive operations of the first virtual target and the second virtual target in the virtual reality space based on the first interactive data and the second interactive data (e.g. based on calculated hand positions; Fajt, ¶ 92: "detecting collisions between objects in a virtual environment involves determining whether the objects intersect with each other within the space of the virtual environment. Collision detection can be implemented in different ways depending on factors such as overall system preferences, bandwidth or processing restrictions, gameplay design, and the like. As one example, accurate models of objects including any irregular features, such as fingers of a hand, may be used in collision detection. This type of collision detection may provide more realistic object interactions, which may be desirable in some scenarios, including complex collaborative gestures such as a handshake involving hooked thumbs."); and

generating and displaying an interactive rendering result (Fajt, ¶ 38: "move the avatar associated with the endpoint system within the shared virtual environment 102, and to move the viewpoint rendered by the head-mounted display device 206 within the shared virtual environment 102").

Fajt does not explicitly disclose where the first and second interactive data are handled. However, Kawamae discloses synchronization of interactions on a server [image omitted: media_image2.png]. Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to process the interaction data server side. One of ordinary skill in the art would have had motivation to synchronize data across multiple devices while saving bandwidth, and a reasonable expectation of success because both systems utilize a client-server model.

Claim 2

Examiner's Interpretation: Interpretation of "aim physical state": the claim term has no apparent meaning in the art, and applicant's specification does not explicitly define its scope. Cited examples roughly correspond to direction, target, coordinates and the like: "For example, for a case that both the first virtual target and the second virtual target perform a kicking operation on the same football and a kicking force of the first virtual target is greater than that of the second virtual target, the first server terminal may comprehensively calculate the kicking force and kicking direction in the first aim physical engine state information and the second aim physical engine state information according to actual motion law that the kicking force of the first virtual target is greater and the movement of the football is more the same as the kicking operation of the first virtual target, but will be affected by the kicking operation of the second virtual target" (Specification, ¶ 128).

Claim Mapping: Fajt discloses wherein the interactive data is aim physical engine state information (Fajt, Fig. 1; ¶ 89 [image omitted: media_image3.png]: "the location change notifications may include information such as an absolute location specified in a coordinate system of the shared virtual environment, a relative location compared to a previous location, a timestamp, and/or the like").

Claim 3

Examiner's Interpretation: Interpretation of "historical physical engine state": applicant's specification defines this claim term as "Historical physical engine state information refers to physical engine state information at the moment before the current moment, that is, physical engine state information generated by an interactive operation at the moment before the current moment." (Specification, ¶ 83)

Claim Mapping: Fajt discloses wherein the aim physical engine state information includes historical physical engine state information and current physical engine state information (Fajt, Fig. 1; ¶ 89, quoted under claim 2).
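
For readers skimming the mapping, the relay-and-state-store behavior quoted from Fajt ¶¶ 46 and 51 is a standard fan-out pattern: each endpoint uploads one notification, the server re-broadcasts it to the other endpoints, and only non-ephemeral state is stored so late joiners can be initialized. The Python sketch below is our own illustration of that pattern; the class and method names are assumptions, not Fajt's code.

    class Endpoint:
        """Stand-in for an endpoint system (e.g. a headset client)."""
        def __init__(self, name):
            self.name = name
        def receive_initial_state(self, state):
            print(f"{self.name} joins with initial state {state}")
        def receive(self, note):
            print(f"{self.name} got {note}")

    class RelayServer:
        """Fans out notifications; stores only non-ephemeral state."""
        def __init__(self):
            self.endpoints, self.state = [], {}
        def connect(self, ep):
            # A late joiner reads stored state instead of waiting for notifications.
            ep.receive_initial_state(dict(self.state))
            self.endpoints.append(ep)
        def notify(self, sender, key, value, ephemeral):
            for ep in self.endpoints:        # one upload, server-side fan-out
                if ep is not sender:
                    ep.receive((key, value))
            if not ephemeral:                # e.g. score, avatar presence
                self.state[key] = value

    server = RelayServer()
    a, b = Endpoint("A"), Endpoint("B")
    server.connect(a)
    server.notify(a, "ball_location", (1, 2, 3), ephemeral=True)  # not stored
    server.notify(a, "score", 5, ephemeral=False)                 # stored
    server.connect(b)  # B receives {'score': 5} on joining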

Claim 4

Examiner's Interpretation: Interpretation of "fusion physical engine state": the claim term has no apparent meaning in the art, and applicant's specification does not explicitly define its scope. The specification gives examples roughly corresponding to processing the state information to determine a type of interaction: "corresponding to the above interaction situations of dragging and leaving, the first server terminal may process the first aim physical engine state information into the first fusion physical engine state information of having a dragging action but without moving forward or backward of the dragged virtual target, and maintain the action of the second virtual target leaving the virtual reality space, that is, taking the second aim physical engine state information as the second fusion physical engine state information. After that, the first server terminal sends the first fusion physical engine state information and the second fusion physical engine state information" (Specification, ¶ 87).

Claim Mapping: Fajt discloses wherein, after the first interactive data is sent to the first server terminal, the method further comprises: receiving first fusion physical engine state information corresponding to the first virtual target (Fajt, ¶ 37: "The shared virtual environment 102 is a virtual room in which two or more users may interact with each other and/or with objects within the shared virtual environment through avatars…a left hand 106 and a right hand 108.") and second fusion physical engine state information corresponding to the second virtual target (Fajt, ¶ 37: "a left hand 116, a right hand 118…In the illustrated scene, the first avatar has just thrown a ball 112 towards the second avatar."), sent by the first server terminal, wherein the fusion physical engine state information is obtained based on the first aim physical engine state information and the second aim physical engine state information (e.g. based on calculated hand positions; Fajt, ¶ 92, quoted under claim 1); the calling the physical engine to render the interactive operations of the first virtual target and the second virtual target in the virtual reality space based on the first interactive data and the second interactive data, and generating and displaying the interactive rendering result comprises: based on the first fusion physical engine state information and the second fusion physical engine state information, calling the physical engine to render the interactive operations of the first virtual target and the second virtual target in the virtual reality space, and generating and displaying the interactive rendering result (Fajt, ¶¶ 29-34, 38: "For example, a collaborative gesture may be defined by a series of states that are moved through rapidly based on the geometry of the individual movements…{avatar hands of two users are not colliding}…{hands are colliding}…{hands are no longer colliding}…after recognizing a collaborative gesture, an endpoint system can transmit a corresponding notification to other devices on the network…The endpoint computing device 208 may use the detected positions and/or motions of the head-mounted display device 206 to move the avatar associated with the endpoint system within the shared virtual environment 102, and to move the viewpoint rendered by the head-mounted display device 206 within the shared virtual environment 102").

Claim 5

Examiner's Interpretation: Interpretation of "attribute setting interface": the claim term is interpreted as an interface for changing aspects of a virtual object (e.g. Fig. 4 [image omitted: media_image4.png]): "an interface that provides attribute fields and their corresponding input boxes, and may also be an interactive three-dimensional object model that provides functions such as dragging, moving and modifying dimension and so on. The first user inputs furniture addition attribute information such as position, size, style and color of the added furniture through the furniture attribute setting interface" (Specification, ¶ 92).

Claim Mapping: Fajt does not explicitly disclose, but Kawamae discloses, wherein generating the first interactive data in response to the interactive operation of the first virtual target in the virtual reality space comprises: displaying an object attribute setting interface in response to the interactive operation of the first virtual target on a virtual object in the virtual reality space; and in response to an input operation to the object attribute setting interface, obtaining object operation attribute information of the virtual object, and generating the first interactive data based on the object operation attribute information (Kawamae, Fig. 12B [image omitted: media_image5.png]: "In S155, the display parameters when the AR object is displayed are set. Thus, the display position, the size, and the direction of the AR object with respect to the object are given. That is, positioning with respect to the object can be performed by giving an offset to the position of a certain feature point of the selected object. In a background object having a feature point which is not clear, such as the region B, any point in the region may be selected and the selected point may be used as a pseudo feature point. The set display parameter is registered as linking data of the AR object in the imaged object data 92 or the background object data 93. FIG. 12B illustrates a specific AR object edit screen.").

Before the effective filing date of this application, it would have been obvious to one of ordinary skill in the art to use an attribute setting interface. One of ordinary skill in the art would have had motivation to provide a means to edit virtual objects, and a reasonable expectation of success because both systems evaluate collaborative interactions to determine which change to apply.

Claim 6

Examiner's Interpretation: Interpretation of "space address": the claim term has no apparent meaning in the art, and applicant's specification neither explicitly defines its scope nor gives explicit examples. The term appears to refer generically to a reference to a shared virtual space.

Claim Mapping: Fajt discloses wherein, before receiving the second interactive data corresponding to the second virtual target sent by the first server terminal, the method further comprises: sending a space address of the virtual reality space to a second server terminal, so that the second server terminal sends the space address to a second client terminal corresponding to the second virtual target, and in response to a space sharing operation of the second client terminal, schedules an aim server corresponding to the first server terminal for the second client terminal (e.g. the space address corresponds to the data needed to join a particular session; a user may join a virtual session with one or more other users, facilitated by a second server that handles messaging; Fajt, ¶¶ 46, 51: "To solve this problem, the first endpoint system 302 can send a single notification to the communication relay server 310, and the communication relay server 310 sends it to the other endpoint systems. This helps conserve the limited upload bandwidth available to the first endpoint system 302. Further details of how this transmission may take place are provided below in FIG. 8 and the accompanying text…storing data in the state data store 404 allows an endpoint system that joins the shared virtual environment later to receive initial status upon joining").
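
The collision-detection trade-off Fajt describes in ¶ 92 (quoted under claim 1 and relied on for claims 4 and 15) is between cheap approximate tests and accurate per-feature ones. The cheapest common test is a bounding-sphere check, sketched below in Python as a generic illustration, not Fajt's implementation.

    import math

    def spheres_collide(center_a, radius_a, center_b, radius_b):
        """Two objects 'collide' if their bounding spheres overlap, i.e. the
        distance between centers is at most the sum of the radii."""
        return math.dist(center_a, center_b) <= radius_a + radius_b

    # Cheap test: treat each avatar hand as a single sphere.
    print(spheres_collide((0, 0, 0), 0.1, (0.15, 0, 0), 0.1))  # True, hands touch
    # Accurate variants instead test detailed meshes (individual fingers), which
    # is what Fajt suggests for complex gestures like a hooked-thumb handshake.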

Claim 7

Examiner's Interpretation: Interpretation of "personal attribute information": the claim term has no apparent meaning in the art, and applicant's specification does not explicitly define its scope. The specification gives examples such as: "virtual target refers to a virtual character of a user in the virtual reality space. The first virtual target refers to a virtual target corresponding to a first user. In some embodiments, the first virtual target is constructed based on person attribute information of the first user. The person attribute information may include height, gender, hairstyle, clothing, etc. The first user may input person attribute information through an electronic apparatus; or shoot his/her own image through a camera of the electronic apparatus, and get the person attribute information by performing a process of target recognition and so on on his/her own image; or scan his/her own body part through a radar sensor of the electronic apparatus to generate point cloud data, and obtain the person attribute information through processing the point cloud data. After that, the electronic apparatus may upload the person attribute information to the first server terminal. The first server terminal uses the person attribute information to construct a three-dimensional model of the character to obtain the first virtual target. This may increase connection between users and the virtual reality space, improve the visual effect of the users and further enhance the user experience." (Specification, ¶ 62)

Claim Mapping: Fajt discloses wherein the virtual reality space is constructed based on a real space where the first virtual target is located, and the first virtual target and the second virtual target are constructed based on personal attribute information of a first user and a second user, respectively (e.g. customizable avatars; Fajt, ¶ 54: "In some embodiments, the user data engine 452 is configured to manage user data within the user data store 458. Some non-limiting examples of user data include unique user identifiers, login and password information, contact information, avatar customization information, preferences").

Claim 8

Claim Mapping: Fajt discloses an information interaction method based on augmented reality and applied to a first server terminal (Fajt, Fig. 3; ¶ 27: "Virtual environments such as virtual reality environments, augmented reality environments, and the like"), the method comprising: receiving first interactive data and second interactive data, respectively, wherein the first interactive data is generated by an interactive operation of a first virtual target in a virtual reality space, the second interactive data is generated by an interactive operation of a second virtual target in the virtual reality space, and the first virtual target and the second virtual target share the virtual reality space (e.g. first and second interactions corresponding to different users, such as user 110 and user 120; Fajt, ¶ 51, quoted in full under claim 1); and sending the first interactive data and the second interactive data to a first client terminal corresponding to the first virtual target and a second client terminal corresponding to the second virtual target, so that the first client terminal and the second client terminal respectively call a physical engine to render the interactive operation based on the first interactive data and the second interactive data, and generate and display an interactive rendering result (Fajt, ¶¶ 46, 51, quoted under claim 6).

Fajt does not explicitly disclose where the first and second interactive data are handled. However, Kawamae discloses synchronization of interactions on a server [image omitted: media_image2.png]. As with claim 1, before the effective filing date of this application it would have been obvious to one of ordinary skill in the art to process the interaction data server side, with motivation to synchronize data across multiple devices while saving bandwidth and a reasonable expectation of success because both systems utilize a client-server model.

Claim 9

Claim Mapping: Fajt further discloses wherein, after receiving the first interactive data and the second interactive data, respectively, the method further comprises: in a case that the interactive data is aim physical engine state information and it is determined that there is an intersection between the first aim physical engine state information and the second aim physical engine state information, generating first fusion physical engine state information corresponding to the first virtual target and second fusion physical engine state information corresponding to the second virtual target, based on the first aim physical engine state information and the second aim physical engine state information (Fajt, ¶¶ 29-34, 38, quoted under claim 4); and sending the first interactive data and the second interactive data to the first client terminal corresponding to the first virtual target and the second client terminal corresponding to the second virtual target comprises: sending the first fusion physical engine state information and the second fusion physical engine state information to the first client terminal and the second client terminal (Fajt, ¶¶ 29-34, 38, quoted under claim 4).

Claim 10

Examiner's Interpretation: The claim limitations are interpreted in the alternative because of the use of the claim language "or" and because each limitation covers different subject matter that can be performed separately in the context of the claim.

Claim Mapping: Fajt as modified by Kawamae discloses wherein the generating the first fusion physical engine state information corresponding to the first virtual target and the second fusion physical engine state information corresponding to the second virtual target based on the first aim physical engine state information and the second aim physical engine state information comprises: generating the first fusion physical engine state information and the second fusion physical engine state information based on the first aim physical engine state information and the second aim physical engine state information, according to preset priorities corresponding to the first virtual target and the second virtual target (e.g. Kawamae's interaction priority: "The application server 52 performs a synchronization and priority process S104a"); or, in a case that the first virtual target and the second virtual target perform interactive operations having an interactive order, generating the first fusion physical engine state information and the second fusion physical engine state information based on the first aim physical engine state information and the second aim physical engine state information according to the interactive order (e.g. sequential gestures; Fajt, ¶¶ 29-34, 38, quoted under claim 4); or, generating the first fusion physical engine state information and the second fusion physical engine state information based on a value of a same state variable or priorities of different state variables in the first aim physical engine state information and the second aim physical engine state information (e.g. the handshake collision state; Fajt, ¶¶ 29-34, 38, quoted under claim 4).

Claim 11

Examiner's Interpretation: Interpreted under 35 U.S.C. 112(f), as set forth in the Claim Interpretation section above.

Claim Mapping: The same teachings and rationales applied to claim 1 are applicable to claim 11.

Claim 12

Examiner's Interpretation: Interpreted under 35 U.S.C. 112(f), as set forth in the Claim Interpretation section above.

Claim Mapping: The same teachings and rationales applied to claim 8 are applicable to claim 12.
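
The priority-based fusion recited in claim 10 (mapped to Kawamae's "synchronization and priority process") amounts to resolving conflicting state updates by a precedence rule. Below is a hedged Python sketch of one such rule; the function, names, and winner-take-all weighting are our assumptions, not taken from either reference (the application's own specification, ¶ 128, describes a blended calculation instead).

    def fuse_states(state_a, state_b, priority_a, priority_b):
        """Merge two aim physical-engine states; on conflicting keys the
        higher-priority virtual target's value wins."""
        fused = dict(state_b)
        for key, value in state_a.items():
            if key not in fused or priority_a >= priority_b:
                fused[key] = value
        return fused

    # Example: both targets kick the same ball; the higher-priority kick wins.
    a = {"ball_velocity": (4.0, 0.0, 1.0)}
    b = {"ball_velocity": (1.0, 0.0, 0.5)}
    print(fuse_states(a, b, priority_a=2, priority_b=1))  # target A's kick wins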

Claim 13

Claim Mapping: The same teachings and rationales applied to claim 1 are applicable to claim 13, with Fajt disclosing a corresponding electronic apparatus comprising: a processor; and a memory configured to store an executable instruction, wherein the processor is configured to read the executable instruction from the memory and execute it to realize the information interaction method based on augmented reality and applied to the first client terminal described in claim 1 (Fajt, Fig. 3).

Claim 14

Examiner's Interpretation: Machine readable media can encompass forms of signal transmission media that fall outside the four statutory categories of invention (MPEP 2106, citing In re Nuijten, 500 F.3d 1346, 84 USPQ2d 1495 (Fed. Cir. 2007)). A claim whose broadest reasonable interpretation covers both statutory and non-statutory embodiments embraces subject matter that is not eligible for patent protection and is therefore directed to non-statutory subject matter (MPEP 2106). Applicant's specification defines computer readable media at paragraph 165 as: "It should be noted that the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination thereof. For example, the computer-readable storage medium may be, but not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof. More specific examples of the computer-readable storage medium may include but not be limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of them. In the present disclosure, the computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus or device."

Claim 14 as drafted recites a computer-readable storage medium (Fajt, ¶ 13). Because applicant's specification explicitly defines storage medium and signal medium as non-overlapping forms of computer readable media, the broadest reasonable interpretation of the claimed storage medium in view of applicant's specification covers only eligible subject matter.

Claim Mapping: The same teachings and rationales applied to claim 1 are applicable to claim 14.

Claim 15

Claim Mapping: Fajt further discloses wherein, after the first interactive data is sent to the first server terminal, the method further comprises: receiving first fusion physical engine state information corresponding to the first virtual target and second fusion physical engine state information corresponding to the second virtual target sent by the first server terminal (Fajt, ¶ 37, quoted under claim 4), wherein the fusion physical engine state information is obtained based on the first aim physical engine state information and the second aim physical engine state information (e.g. based on calculated hand positions; Fajt, ¶ 92, quoted under claim 1); the calling the physical engine to render the interactive operations of the first virtual target and the second virtual target in the virtual reality space based on the first interactive data and the second interactive data, and generating and displaying the interactive rendering result comprises: based on the first fusion physical engine state information and the second fusion physical engine state information, calling the physical engine to render the interactive operations of the first virtual target and the second virtual target in the virtual reality space, and generating and displaying the interactive rendering result (Fajt, ¶¶ 29-34, 38, quoted under claim 4).

Claim 16

Examiner's Interpretation: Interpreted under 35 U.S.C. 112(f), as set forth in the Claim Interpretation section above.

Claim Mapping: The same teachings and rationales applied to claim 2 are applicable to claim 16.

Claim 17

Examiner's Interpretation: Interpreted under 35 U.S.C. 112(f), as set forth in the Claim Interpretation section above.

Claim Mapping: The same teachings and rationales applied to claim 3 are applicable to claim 17.

Claim 18

Examiner's Interpretation: Interpreted under 35 U.S.C. 112(f), as set forth in the Claim Interpretation section above.

Claim Mapping: The same teachings and rationales applied to claim 4 are applicable to claim 18.

Claim 19

Claim Mapping: The same teachings and rationales applied to claim 8 are applicable to claim 19.

Claim 20

Examiner's Interpretation: The same signal-medium analysis set forth for claim 14 applies. Claim 20 as drafted recites a computer-readable storage medium (Fajt, ¶ 13); because applicant's specification (¶ 165, quoted under claim 14) explicitly defines storage medium and signal medium as non-overlapping forms of computer readable media, the broadest reasonable interpretation of the claimed storage medium covers only eligible subject matter.

Claim Mapping: The same teachings and rationales applied to claim 8 are applicable to claim 20.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY, whose telephone number is (571) 272-4582. The examiner can normally be reached Monday through Friday, 9:00 am-5:30 pm (EST). Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR; status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RYAN M GRAY/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Mar 01, 2024: Application Filed
Feb 21, 2026: Non-Final Rejection under §103 and §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597216 - ARTIFICIAL INTELLIGENCE VIRTUAL MAKEUP METHOD AND DEVICE USING MULTI-ANGLE IMAGE RECOGNITION
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12586252 - METHOD FOR ENCODING THREE-DIMENSIONAL VOLUMETRIC DATA
Granted Mar 24, 2026 (2y 5m to grant)
Patent 12572892 - SYSTEMS AND METHODS FOR VISUALIZATION OF UTILITY LINES
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561928 - SYSTEMS AND METHODS FOR CALCULATING OPTICAL MEASUREMENTS AND RENDERING RESULTS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12542946 - REMOTE PRESENTATION WITH AUGMENTED REALITY CONTENT SYNCHRONIZED WITH SEPARATELY DISPLAYED VIDEO CONTENT
Granted Feb 03, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 88%
With Interview: 98% (+10.9%)
Median Time to Grant: 2y 2m
PTA Risk: Low

Based on 672 resolved cases by this examiner. Grant probability is derived from the career allow rate.
