Prosecution Insights
Last updated: April 19, 2026
Application No. 18/460,068

INFORMATION INTERACTION METHOD AND APPARATUS, ELECTRONIC DEVICE AND STORAGE MEDIUM

Non-Final OA §103
Filed: Sep 01, 2023
Examiner: CHEN, YU
Art Unit: 2613
Tech Center: 2600 — Communications
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 2y 10m
With Interview: 98%

Examiner Intelligence

Career Allow Rate: 68%, above average (711 granted / 1052 resolved; +5.6% vs TC avg)
Interview Lift: +29.9% (strong), based on resolved cases with interview
Typical Timeline: 2y 10m average prosecution; 110 applications currently pending
Career History: 1162 total applications across all art units

Statute-Specific Performance

§101: 2.2% (-37.8% vs TC avg)
§103: 43.9% (+3.9% vs TC avg)
§102: 27.0% (-13.0% vs TC avg)
§112: 20.7% (-19.3% vs TC avg)
Based on career data from 1052 resolved cases. Tech Center averages are estimates.

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 11/04/2025 has been entered.

Response to Amendment

This is in response to applicant's amendment/response filed on 11/04/2025, which has been entered and made of record. Claims 1, 10, and 19 have been amended. No claim has been cancelled. No claim has been added. Claims 1-19 are pending in the application. As an initial matter, the rejection under 35 USC 112 for claims 2 and 11 has been withdrawn in view of applicant's amendments.

Response to Arguments

Applicant's arguments filed on 08/01/2025 have been fully considered but they are not persuasive.

Applicant submits: "First, Mei does not disclose the 'Conditional Trigger Mechanism Based on Spatial Relationship Determination.' Mei discloses an effect distribution method based on instructions and logical attributes. The sole prerequisite for its special effect display is the reception of an 'effect generation instruction,' after which the system unconditionally displays the effect at predetermined locations (such as the individual half-length portrait and the camp background). In contrast, the inventive concept of the present application lies in 'conditional triggering based on real-time spatial relationship determination.' The system must continuously detect the spatial relationship between the 'target control part' controlled by a user and the 'preset first special effect element' serving as a scene element, and trigger the second special effect only when this relationship meets a preset condition. This constitutes a dynamic process of spatial state detection and response. The Office Action erroneously interprets the static description of the final effect position in Mei - 'on a half-length portrait' - as equivalent to the dynamic spatial relationship judgment that serves as a triggering condition in the present application. This is technically unreasonable." (Remarks, Page 9).

The examiner disagrees with Applicant's premises and conclusion. Mei, ¶0105, recites: "if description is made by using an example in which virtual objects are displayed in the object presentation interface in the form of identity pictures, the terminal may generate the first special effect on an identity picture of the first virtual object in the object presentation interface. In this implementation, the terminal can display the special effect on the identity picture of the first virtual object, so that other terminals participating in the target battle can clearly learn that the terminal corresponding to the first virtual object triggers the special effect, and man-machine interaction efficiency is relatively high. In some embodiments, the terminal determines a second virtual object from the plurality of virtual objects, where the second virtual object and the first virtual object belong to the same camp; and generates the second special effect in a background region in which identity pictures of the first virtual object and the second virtual object are located. For example, the first special effect is to display a 'converged' lighting effect on a half-length portrait of the first virtual object, the second special effect is to display a ribbon lighting effect similar to 'galaxy' on a background region of virtual objects belonging to the camp, and the first special effect and the second special effect may be collectively referred to as a 'backdrop' special effect."

Thus, Mei discloses the first special effect on a half-length portrait of the first virtual object. This is the spatial relationship. The claim terms are mapped as follows:

- "a spatial relationship": the first special effect is to display a "converged" lighting effect on a half-length portrait of the first virtual object
- "a target control part controlled by a user and displayed in the virtual reality space": the first virtual object
- "the preset first special effect element": the first special effect is to display a "converged" lighting effect
- "a spatial relationship meets a preset condition": on a half-length portrait of the first virtual object
- "second special effect": display a ribbon lighting effect similar to "galaxy" on a background region of virtual objects belonging to the camp, and the first special effect and the second special effect may be collectively referred to as a "backdrop" special effect

In Fig. 13, it is easy to see that the "converged" lighting effect on the first virtual object is the preset condition that has to be met before the second special effect can be displayed, so that the "galaxy" effect can combine with it into a collective "backdrop" special effect. Otherwise, there is no collective "backdrop" effect. Applicant further introduces the word "continuously" in the argument, but this term is not in the claim. Even considering "continuously," Cai can be combined with Mei to teach the feature "Conditional Trigger Mechanism Based on Spatial Relationship Determination" (Cai, ¶0045-0047).

Applicant submits: "Second, the specific limitations regarding 'displaying a second special effect' are not disclosed. In the present application, the 'first special effect element' refers to one or a set of virtual objects within the scene, and the display of the 'second special effect' involves changes applied to the object itself (such as adding objects or effects, or altering its form or parameters). In Mei, the lighting effect on a background region, which the Office Action equates to the 'second special effect,' is primarily rendered on the background region, not on the first virtual object. This indicates that the 'first special effect' and the 'second special effect' in Mei are parallel visual elements applied to different display subjects. This completely fails to meet the explicit limitations in the present application regarding the 'second special effect' in relation to the 'first special effect element,' namely: 'adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element.'"

The examiner disagrees with Applicant's premises and conclusion. Mei, ¶0116: "the terminal displays a first special effect 1331 with a 'converged' lighting effect on an identity picture of the virtual object 1307, and further displays a second special effect 1332 similar to 'galaxy' and with a ribbon lighting effect on a background region of identity pictures of the virtual objects 1306 to 1310." Fig. 14, ¶0127: "the social networking interaction option of special effects is provided, so that efficient social contact may be achieved between users." Fig. 19, ¶0191: "a first special effect 1931 is displayed on an identity picture of the virtual object 1902 in the object presentation interface 1900, and a second special effect 1932 is displayed in a background region of identity pictures of the virtual objects 1901 to 1905 in a camp to which the virtual object 1902 belongs." ¶0224: "when likes are given, the originally displayed special effect is controlled to be enlarged, be increased in brightness, flicker for several seconds, and the like, that is, the original special effect of the first virtual object is converted from a first display form into a second display form." It is clear here that Mei teaches "the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element." Applicant needs to realize how broad the term "object" is. Applicant and the examiner may refer to different things as the object. However, as long as some kind of effect is applied to the originally displayed special effect, the claimed feature "displaying a second special effect" is taught.

Applicant submits: "the process described in Cai (e.g., the display of the 'target object') does not constitute performing the aforementioned types of operations on a preset 'first special effect element.' It represents a transformation of form or identity. The process involves the pollution source (the original object) itself converting into an attackable 'display target object' (a new object). This is not 'adding a special effect' to or 'changing a parameter' of the original object, but rather a replacement of the object."

The examiner disagrees with Applicant's premises and conclusion. Cai, ¶0083: "the terminal uses the tornado effect to present the target pollution source existing when the distance between the virtual object and the target pollution source is greater than the distance threshold." ¶0086: "During the conversation of the form, the special tornado effect gradually becomes transparent, and the target object gradually becomes clear." ¶0105: "The terminal can also display a special purified region effect in the target purified region. The special purified region effect may be diffusing ripples, a halo, or haze, which is not limited in this embodiment of this application."
These clearly add effects or change a display parameter.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-2, 7-8, 10-11, 16-17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US Pub 2022/0379208 A1) in view of Cai (US Pub 2024/0273823 A1).

As to claim 1, Mei discloses an information interaction method, comprising:

displaying a preset first special effect element in a virtual reality space (¶0104, "the special effect is divided into a first special effect and a second special effect. The first special effect is a special effect of the first virtual object, that is to say, the first special effect refers to an individual special effect of the first virtual object."), wherein the preset first special effect element comprises one or a set of virtual objects (Fig. 19, ¶0104, "the special effect is divided into a first special effect and a second special effect. The first special effect is a special effect of the first virtual object, that is to say, the first special effect refers to an individual special effect of the first virtual object." ¶0116, "the terminal displays a first special effect 1331 with a 'converged' lighting effect on an identity picture of the virtual object 1307, and further displays a second special effect 1332 similar to 'galaxy' and with a ribbon lighting effect on a background region of identity pictures of the virtual objects");

determining whether a spatial relationship between a target control part controlled by a user and displayed in the virtual reality space and the preset first special effect element meets a preset condition (¶0105, "if description is made by using an example in which virtual objects are displayed in the object presentation interface in the form of identity pictures, the terminal may generate the first special effect on an identity picture of the first virtual object in the object presentation interface. In this implementation, the terminal can display the special effect on the identity picture of the first virtual object, so that other terminals participating in the target battle can clearly learn that the terminal corresponding to the first virtual object triggers the special effect, and man-machine interaction efficiency is relatively high. In some embodiments, the terminal determines a second virtual object from the plurality of virtual objects, where the second virtual object and the first virtual object belong to the same camp; and generates the second special effect in a background region in which identity pictures of the first virtual object and the second virtual object are located. For example, the first special effect is to display a 'converged' lighting effect on a half-length portrait of the first virtual object, the second special effect is to display a ribbon lighting effect similar to 'galaxy' on a background region of virtual objects belonging to the camp, and the first special effect and the second special effect may be collectively referred to as a 'backdrop' special effect."); and

in response to determining that the spatial relationship meets the preset condition, displaying a second special effect associated with the preset first special effect element (¶0105, quoted above; ¶0106; Fig. 13; ¶0116; ¶0130),

wherein the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element (¶0116, "the terminal displays a first special effect 1331 with a 'converged' lighting effect on an identity picture of the virtual object 1307, and further displays a second special effect 1332 similar to 'galaxy' and with a ribbon lighting effect on a background region of identity pictures of the virtual objects 1306 to 1310." Fig. 14, ¶0127, "the social networking interaction option of special effects is provided, so that efficient social contact may be achieved between users." Fig. 19, ¶0191, "a first special effect 1931 is displayed on an identity picture of the virtual object 1902 in the object presentation interface 1900, and a second special effect 1932 is displayed in a background region of identity pictures of the virtual objects 1901 to 1905 in a camp to which the virtual object 1902 belongs." ¶0224, "when likes are given, the originally displayed special effect is controlled to be enlarged, be increased in brightness, flicker for several seconds, and the like, that is, the original special effect of the first virtual object is converted from a first display form into a second display form.").

Assuming, arguendo, that Mei does not disclose "determining whether a spatial relationship between a target control part controlled by a user and displayed in the virtual reality space and the preset first special effect element meets a preset condition, and in response to determining that the spatial relationship meets the preset condition, displaying a second special effect associated with the preset first special effect element, wherein the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element":

Cai teaches determining whether a spatial relationship between a target control part controlled by a user and displayed in the virtual reality space and the preset first special effect element meets a preset condition (Cai, Fig. 2-3, ¶0043, "during movement of the virtual object in a target polluted region of the plurality of polluted regions, a region impact value of a position of the virtual object based on a distance between the virtual object and a target pollution source in the target polluted region, the region impact value being configured for representing a degree of pollution caused by the target pollution source to the position of the virtual object."), and in response to determining that the spatial relationship meets the preset condition, displaying a second special effect associated with the preset first special effect element (Cai, ¶0045, "During the movement of the virtual object in the target polluted region, the region impact value displayed in the terminal changes with a change of the distance between the virtual object and the target pollution source. In some embodiments, a smaller distance between the virtual object and the target pollution source leads to a higher degree of pollution caused by the target pollution source to the position of the virtual object, and thereby leads to a larger region impact value. A larger distance between the virtual object and the target pollution source leads to a lower degree of pollution caused by the target pollution source to the position of the virtual object, and thereby leads to a smaller region impact value." ¶0046, "203: The terminal displays a target object when the distance between the virtual object and the target pollution source is not greater than a distance threshold, the target object being obtained by converting the target pollution source." ¶0047, "during the movement of the virtual object in the target polluted region, the terminal can display not only the region impact value but also the target object. The target object is obtained by converting the target pollution source. An occasion for converting the target pollution source to the target object is not limited in this embodiment of this application. For example, as the virtual object gradually approaches the target pollution source, the distance between the virtual object and the target pollution source gradually decreases. When the distance between the virtual object and the target pollution source does not exceed the distance threshold, the target pollution source may be converted to the target object."), wherein the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element (Cai, ¶0082, "the terminal can display the target pollution source through a special effect. The special effect may be tornado, typhoon, or fog." Fig. 6, ¶0083, "the terminal uses the tornado effect to present the target pollution source existing when the distance between the virtual object and the target pollution source is greater than the distance threshold." ¶0086, "During the conversation of the form, the special tornado effect gradually becomes transparent, and the target object gradually becomes clear." ¶0104, "when the virtual health value of the target object is less than the defeat threshold, the terminal displays a special explosion effect at the position of the target object, to replace the target object previously displayed. Through the special explosion effect, the terminal indicates that the virtual scene no longer includes the target object and the target polluted region." ¶0105, "The terminal can also display a special purified region effect in the target purified region. The special purified region effect may be diffusing ripples, a halo, or haze, which is not limited in this embodiment of this application.").

Mei and Cai are considered to be analogous art because both pertain to virtual objects. It would have been obvious before the effective filing date of the claimed invention to have modified Mei with the features of "determining whether a spatial relationship between a target control part controlled by a user and displayed in the virtual reality space and the preset first special effect element meets a preset condition, and in response to determining that the spatial relationship meets the preset condition, displaying a second special effect associated with the preset first special effect element, wherein the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element" as taught by Cai. All the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.
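For orientation, the disputed limitation can be read as a gate on a spatial test: check the spatial relationship between a user-controlled part and the preset first special effect element, and only when it satisfies a preset condition, display the second special effect by modifying that element. The sketch below illustrates that reading only; the names, the distance-based condition, and the specific modifications are hypothetical placeholders, not code or terminology taken from the application, Mei, or Cai.

    # Minimal sketch (hypothetical names; not drawn from the record) of the
    # conditional trigger recited in claim 1: the second special effect is
    # displayed only when the spatial relationship meets a preset condition.
    from dataclasses import dataclass, field
    import math


    @dataclass
    class EffectElement:
        position: tuple                      # (x, y, z) in the virtual reality space
        scale: float = 1.0                   # a display parameter of the element
        animations: list = field(default_factory=list)


    def condition_met(control_pos, element, threshold):
        """Example preset condition: the distance does not exceed a threshold."""
        return math.dist(control_pos, element.position) <= threshold


    def update(control_pos, element, threshold=0.5):
        # Second special effect, applied to the first special effect element
        # itself: add an animation special effect and change a display parameter.
        if condition_met(control_pos, element, threshold):
            element.animations.append("sparkle_burst")
            element.scale *= 1.2

In this reading, the disagreement maps onto the if-statement: the applicant argues that Mei displays its effects unconditionally once an effect generation instruction is received, while the examiner reads Mei's portrait/background arrangement, and Cai's distance threshold, onto the condition.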
As to claim 2, claim 1 is incorporated and the combination of Mei and Cai discloses the displaying the preset first special effect element in the virtual reality space comprises: displaying the preset first special effect element in the virtual reality space according to a set first special effect (Mei, ¶0105, "the first special effect is to display a 'converged' lighting effect on a half-length portrait of the first virtual object, the second special effect is to display a ribbon lighting effect similar to 'galaxy' on a background region of virtual objects belonging to the camp").

As to claim 7, claim 1 is incorporated and the combination of Mei and Cai discloses setting a special effect script file for displaying the second special effect in advance for the first special effect element (Mei, ¶0097, "a special effect file corresponding to the special effect based on the special effect generating instruction"; "the special effect file corresponding to the special effect is a file made in advance, and by invoking the special effect file, the terminal can generate the corresponding special effect in the interface." ¶0099, "The terminal loads a special effect file corresponding to the effect ID based on the effect ID of the special effect."); and the displaying the second special effect associated with the first special effect element comprises: rendering the second special effect in the virtual reality space based on the special effect script file (Mei, ¶0097, "The terminal generates the special effect in the object presentation interface based on the generating location and the generating time of the special effect and the special effect file corresponding to the special effect." ¶0098, "The terminal determines a rendering parameter of the special effect based on the file corresponding to the special effect and the generating location of the special effect. The terminal performs rendering in the object presentation interface based on the rendering parameter, to generate the special effect.").

As to claim 8, claim 1 is incorporated and the combination of Mei and Cai discloses the determining whether the spatial relationship between the target control part controlled by the user and displayed in the virtual reality space and the preset first special effect element meets the preset condition comprises: periodically monitoring whether the spatial relationship meets the preset condition at a preset time interval (Mei, ¶0114, "or the target duration is carried in the special effect generating instruction, so that the server may designate a same or different target duration for a special effect triggered each time, and therefore the target duration for which the special effect is continuously being displayed can be dynamically adjusted." ¶0147, "When the server receives a first special effect triggering request in the current round of special effect contest process, a target time period may be determined, where the target time period takes a receiving moment of the first special effect triggering request as a start moment and a moment with a target time interval after the start moment as an end moment, and then the at least one special effect triggering request received in the target time period is obtained. The target time interval is any value greater than 0. For example, the target time interval is 0.3 seconds, and the target time interval may be dynamically configured by a technician or may be set to a default value." ¶0149, "the server may receive a special effect triggering request only in the target time period." ¶0155, "when the server receives a plurality of special effect triggering requests in the target time period, the first virtual object meeting the target condition is screened through a decision algorithm. In this embodiment of this application, description is made by using an example in which there is one first virtual object, and the decision algorithm may include: determining, by the server, a target camp to which a historical virtual object triggering the special effect last time belongs; obtaining at least one historical triggering situation of the at least one virtual object for the special effect; and determining the first virtual object meeting the target condition based on the at least one historical triggering situation and the target camp." ¶0168, "where a time interval between the two backdrop contest requests is less than a target time interval (for example, 0.3 seconds), the server makes a decision, by using a decision algorithm, to determine one or more first virtual objects finally grabbing a backdrop").

As to claim 10, the combination of Mei and Cai discloses an electronic device, comprising: at least one memory and at least one processor; wherein the memory is configured to store program code, and the processor is configured to call the program code stored in the memory to cause the electronic device to perform the steps of: displaying a preset first special effect element in a virtual reality space, wherein the preset first special effect element comprises one or a set of virtual objects; determining whether a spatial relationship between a target control part controlled by a user and displayed in the virtual reality space and the preset first special effect element meets a preset condition; and in response to determining that the spatial relationship meets the preset condition, displaying a second special effect associated with the preset first special effect element, wherein the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element (see claim 1 for detailed analysis).

As to claim 11, claim 10 is incorporated and the combination of Mei and Cai discloses the step of displaying the preset first special effect element in the virtual reality space comprises: displaying the preset first special effect element in the virtual reality space according to a set first special effect (see claim 2 for detailed analysis).

As to claim 16, claim 10 is incorporated and the combination of Mei and Cai discloses the step of: setting a special effect script file for displaying the second special effect in advance for the preset first special effect element; and the step of displaying the second special effect associated with the preset first special effect element comprises: rendering the second special effect in the virtual reality space based on the special effect script file (see claim 7 for detailed analysis).
As to claim 17, claim 10 is incorporated and the combination of Mei and Cai discloses the step of determining whether the spatial relationship between the target control part controlled by the user and displayed in the virtual reality space and the preset first special effect element meets the preset condition comprises: periodically monitoring whether the spatial relationship meets the preset condition at a preset time interval (see claim 8 for detailed analysis).

As to claim 19, the combination of Mei and Cai discloses a non-transitory computer storage medium, storing program code, which, when executed by a computer device, causes the computer device to perform the steps of: displaying a preset first special effect element in a virtual reality space, wherein the preset first special effect element comprises one or a set of virtual objects; determining whether a spatial relationship between a target control part controlled by a user and displayed in the virtual reality space and the preset first special effect element meets a preset condition; and in response to determining that the spatial relationship meets the preset condition, displaying a second special effect associated with the preset first special effect element, wherein the displaying the second special effect associated with the preset first special effect element comprises at least one of: adding at least one of another virtual object or an animation special effect to the first special effect element, or changing at least one of a motion parameter, a gesture, a shape, a size, and a display parameter of the first special effect element (see claim 1 for detailed analysis).

Claims 3-4, 6, 12-13, and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US Pub 2022/0379208 A1) in view of Cai (US Pub 2024/0273823 A1) and Hou et al. (US Pub 2021/0110617 A1).

As to claim 3, claim 1 is incorporated and Mei does not disclose moving the preset first special effect element in the virtual reality space along a preset path. Hou teaches moving the preset first special effect element in the virtual reality space along a preset path (Hou, ¶0075, "The AR device or the server may set a path planning algorithm to generate the moving path of the virtual object based on the movement position of the virtual object and the position data of the AR device. Then, the AR device, after acquiring the locally generated moving path of the virtual object or acquiring the moving path, generated by the server, of the virtual object, may fuse the special effect data of the virtual object in the three-dimensional scene model matched with the reality scene and the acquired moving path to determine the presentation data including the moving state of the virtual object."). Mei and Hou are considered to be analogous art because both pertain to virtual objects. It would have been obvious before the effective filing date of the claimed invention to have modified Mei with the features of "moving the preset first special effect element in the virtual reality space along a preset path" as taught by Hou. All the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.
As to claim 4, claim 1 is incorporated and Mei does not disclose determining a position of an avatar corresponding to the user in the virtual reality space; and moving the preset first special effect element towards the position of the avatar. Hou teaches determining a position of an avatar corresponding to the user in the virtual reality space; and moving the preset first special effect element towards the position of the avatar (Hou, ¶0075, "The AR device or the server may set a path planning algorithm to generate the moving path of the virtual object based on the movement position of the virtual object and the position data of the AR device. Then, the AR device, after acquiring the locally generated moving path of the virtual object or acquiring the moving path, generated by the server, of the virtual object, may fuse the special effect data of the virtual object in the three-dimensional scene model matched with the reality scene and the acquired moving path to determine the presentation data including the moving state of the virtual object." ¶0084, "movement positions of different virtual objects may be the same or may be different, and the movement position of the virtual object may be set according to a practical requirement. After the matched virtual object is determined based on the attribute information of the user associated with the AR device, the movement position of the virtual object matched with the attribute information is acquired, then a moving path of the virtual object matched with the attribute information is generated based on the acquired movement position of the virtual object and the position data of the AR device, and finally, the presentation data including the moving state of the virtual object matched with the attribute information is determined based on the moving path and special effect data of the virtual object matched with the attribute information in the three-dimensional scene model matched with the reality scene."). Mei and Hou are considered to be analogous art because both pertain to virtual objects. It would have been obvious before the effective filing date of the claimed invention to have modified Mei with the features of "determining a position of an avatar corresponding to the user in the virtual reality space; and moving the preset first special effect element towards the position of the avatar" as taught by Hou. All the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.

As to claim 6, claim 3 is incorporated and the combination of Mei and Hou discloses that different special effect elements are associated with different preset paths (Hou, ¶0074, "the presentation data including the moving state of the virtual object is determined based on the moving path and special effect data of the virtual object in a three-dimensional scene model matched with a reality scene." ¶0084, "movement positions of different virtual objects may be the same or may be different, and the movement position of the virtual object may be set according to a practical requirement." ¶0088, "special effect of the virtual object 33 at the position point 31 is different from special effect data at the position point 32." Different objects may have different paths, and different positions have different effects; thus, the claimed feature would have been obvious.).

As to claim 12, claim 10 is incorporated and the combination of Mei and Hou discloses the step of displaying the preset first special effect element in the virtual reality space comprises: moving the preset first special effect element in the virtual reality space along a preset path (see claim 3 for detailed analysis).

As to claim 13, claim 10 is incorporated and the combination of Mei and Hou discloses the step of displaying the preset first special effect element in the virtual reality space comprises: determining a position of an avatar corresponding to the user in the virtual reality space; and moving the preset first special effect element towards the position of the avatar (see claim 4 for detailed analysis).

As to claim 15, claim 12 is incorporated and the combination of Mei and Hou discloses that different special effect elements are associated with different preset paths (see claim 6 for detailed analysis).

Claims 5 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US Pub 2022/0379208 A1) in view of Hou et al. (US Pub 2021/0110617 A1), Cai (US Pub 2024/0273823 A1) and Maejima et al. (US Pub 2017/0072311 A1).

As to claim 5, claim 1 is incorporated and Mei does not disclose randomly determining a movement end point of the first special effect element in the virtual reality space; and moving the preset first special effect element towards the movement end point. Hou teaches determining a movement end point of the preset first special effect element in the virtual reality space; and moving the first special effect element towards the movement end point (Hou, ¶0054, "FIG. 2A includes a preset initial position 21 of the virtual object and a preset end position 22 of the virtual object. In response to detecting that the AR device meets the preset presentation condition for triggering virtual object presentation, the new moving path of the virtual object may be generated based on the position data of the AR device. There are multiple situations for the new moving path of the virtual object. Exemplarily, FIG. 2B shows one situation. It can be seen from FIG. 2B that the new moving path of the virtual object includes the preset initial position 21, the preset end position 22 of the virtual object and a position 23 of the AR device." ¶0057, "a moving path that starts from the present real-time position or preset initial position of the virtual object, passes by the position of the AR device and ends at the preset end position of the virtual object is generated." ¶0084, "the presentation data including the moving state of the virtual object matched with the attribute information is determined based on the moving path and special effect data of the virtual object matched with the attribute information in the three-dimensional scene model matched with the reality scene."). Mei and Hou are considered to be analogous art because both pertain to virtual objects. It would have been obvious before the effective filing date of the claimed invention to have modified Mei with the features of "determining a movement end point of the first special effect element in the virtual reality space; and moving the preset first special effect element towards the movement end point" as taught by Hou.
All the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention. The combination of Mei and Hou does not disclose randomly determining. Maejima teaches randomly determining (Maejima, ¶0199, "a position determined at random"). Mei, Hou, and Maejima are considered to be analogous art because all pertain to virtual objects. It would have been obvious before the effective filing date of the claimed invention to have modified Mei with the feature of "randomly determining" as taught by Maejima. The claim would have been obvious because "a person of ordinary skill has good reason to pursue the known options within his or her technical grasp. If this leads to the anticipated success, it is likely the product not of innovation but of ordinary skill and common sense."

As to claim 14, claim 10 is incorporated and the combination of Mei, Hou and Maejima discloses the step of displaying the preset first special effect element in the virtual reality space comprises: randomly determining a movement end point of the preset first special effect element in the virtual reality space; and moving the preset first special effect element towards the movement end point (see claim 5 for detailed analysis).

Claims 9 and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Mei et al. (US Pub 2022/0379208 A1) in view of Cai (US Pub 2024/0273823 A1).

As to claim 9, claim 1 is incorporated and Mei does not disclose that if a distance between the target control part and the preset first special effect element does not exceed a preset threshold, it is determined that the spatial relationship meets the preset condition. Cai teaches that if a distance between the target control part and the preset first special effect element does not exceed a preset threshold, it is determined that the spatial relationship meets the preset condition (Cai, ¶0082, "the terminal can display the target pollution source through a special effect. The special effect may be tornado, typhoon, or fog, which is not limited in this embodiment of this application. It may be learned that, when the distance between the virtual object and the target pollution source is greater than the distance threshold, the target pollution source is a non-entity."). Mei and Cai are considered to be analogous art because both pertain to virtual objects. It would have been obvious before the effective filing date of the claimed invention to have modified Mei with the feature of "if a distance between the target control part and the preset first special effect element does not exceed a preset threshold, it is determined that the spatial relationship meets the preset condition" as taught by Cai. All the claimed elements were known in the prior art, one skilled in the art could have combined the elements as claimed by known methods with no change in their respective functions, and the combination would have yielded predictable results to one of ordinary skill in the art at the time of the invention.

As to claim 18, claim 10 is incorporated and the combination of Mei and Cai discloses that if a distance between the target control part and the preset first special effect element does not exceed a preset threshold, it is determined that the spatial relationship meets the preset condition.
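The dependent claims treated above reduce to two concrete ingredients: a periodic check of the spatial relationship at a preset time interval (claims 8/17) and a distance-not-exceeding-a-threshold form of the preset condition (claims 9/18). A minimal combined sketch follows; the names, values, and polling structure are hypothetical illustrations, not taken from the application or the cited references.

    # Minimal sketch (hypothetical names and values) of periodic monitoring at a
    # preset time interval (claims 8/17) with a distance-threshold form of the
    # preset condition (claims 9/18).
    import math
    import time


    def monitor(get_control_pos, get_element_pos, show_second_effect,
                threshold=0.5, interval_s=0.1, max_checks=600):
        """Check the spatial relationship every interval_s seconds."""
        for _ in range(max_checks):
            if math.dist(get_control_pos(), get_element_pos()) <= threshold:
                show_second_effect()   # preset condition met: trigger the second effect
                return True
            time.sleep(interval_s)     # preset time interval between checks
        return False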
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to YU CHEN, whose telephone number is (571) 270-7951. The examiner can normally be reached M-F 8-5 PST, mid-day flex. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Xiao Wu, can be reached at 571-272-7761. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/YU CHEN/
Primary Examiner, Art Unit 2613

Prosecution Timeline

Sep 01, 2023: Application Filed
Apr 26, 2025: Non-Final Rejection — §103
Aug 01, 2025: Response Filed
Sep 02, 2025: Final Rejection — §103
Nov 04, 2025: Request for Continued Examination
Nov 13, 2025: Response after Non-Final Action
Feb 12, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12604497: THIN FILM TRANSISTOR AND ARRAY SUBSTRATE (2y 5m to grant; granted Apr 14, 2026)
Patent 12597176: IMAGE GENERATOR AND METHOD OF IMAGE GENERATION (2y 5m to grant; granted Apr 07, 2026)
Patent 12589481: TOOL ATTRIBUTE MANAGEMENT IN AUTOMATED TOOL CONTROL SYSTEMS (2y 5m to grant; granted Mar 31, 2026)
Patent 12588347: DISPLAY DEVICE (2y 5m to grant; granted Mar 24, 2026)
Patent 12586265: LINE DRAWING METHOD, LINE DRAWING APPARATUS, ELECTRONIC DEVICE, AND COMPUTER READABLE STORAGE MEDIUM (2y 5m to grant; granted Mar 24, 2026)

Study what changed to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
With Interview: 98% (+29.9%)
Median Time to Grant: 2y 10m
PTA Risk: High

Based on 1052 resolved cases by this examiner. Grant probability is derived from the career allow rate.
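For reference, the career allow rate works out to 711 granted / 1052 resolved ≈ 67.6%, shown as 68%; the 98% with-interview figure appears to be that base rate plus the +29.9% interview lift (67.6% + 29.9% ≈ 97.5%, rounding to 98%). This is an inference from the labels above rather than a documented formula.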
