Prosecution Insights
Last updated: April 19, 2026
Application No. 18/747,290

VIRTUAL OBJECT DISPLAY METHOD AND APPARATUS, TERMINAL DEVICE, AND STORAGE MEDIUM

Non-Final OA §103
Filed: Jun 18, 2024
Examiner: LE, MICHAEL
Art Unit: 2614
Tech Center: 2600 — Communications
Assignee: Tencent Technology (Shenzhen) Company Limited
OA Round: 1 (Non-Final)
Grant Probability: 66% (Favorable)
OA Rounds: 1-2
To Grant: 3y 3m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 66% (above average; 568 granted / 864 resolved; +3.7% vs TC avg)
Interview Lift: +22.1% (strong; resolved cases with interview)
Typical Timeline: 3y 3m avg prosecution; 61 currently pending
Career History: 925 total applications across all art units

Statute-Specific Performance

§101: 12.4% (-27.6% vs TC avg)
§103: 52.7% (+12.7% vs TC avg)
§102: 13.4% (-26.6% vs TC avg)
§112: 15.9% (-24.1% vs TC avg)
Tech Center averages are estimates • Based on career data from 864 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

1. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Information Disclosure Statement

2. The information disclosure statements (IDS) submitted on the following dates are in compliance with the provisions of 37 CFR 1.97 and are being considered by the Examiner: 07/03/2024; 05/02/2025; 12/04/2025.

Claim Rejections - 35 USC § 103

3. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

4. Claims 1, 4, 7-9, 12, 15-17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. ("Lu") [CN-111862333-A] in view of Takayama et al. ("Takayama") [US-2018/0357831-A1].

Regarding claim 1, Lu discloses a virtual object display method performed by a computer device (Lu- ¶0007, at least discloses an augmented reality-based content processing method applied to a terminal device; Fig. 1 and ¶0030, at least disclose The augmented reality-based content processing system 10 includes: a terminal device 100, an interactive device 200, and a three-dimensional object 300.), the method comprising:

determining an interception plane of a three-dimensional model of a first virtual object (Lu- Fig. 1 and ¶0030, at least disclose The terminal device 100 is used to determine the virtual cutting plane 401 [an interception plane] based on the spatial position information of the interactive device 200, determine the internal virtual section content corresponding to the three-dimensional object 100 based on the virtual cutting plane 401 [determining an interception plane of a three-dimensional model of a first virtual object], and control the display of the internal virtual section content according to the control instructions generated by the interactive device 200, so as to allow users to view the internal structure of the three-dimensional object 300 through augmented reality; ¶0037-0037, at least disclose after the terminal device 100 determines the spatial position of the virtual cutting plane 401, the virtual cutting plane 401 can be displayed according to the spatial position. The user can see the virtual cutting plane 401 superimposed on the interactive device 200 through the terminal device 100 […] The terminal device 100 can obtain the cutting information of the virtual cutting plane 401 for the three-dimensional object 300 according to the spatial position of the virtual cutting plane 401, and thus can obtain and display the internal virtual cutting content 403 of the three-dimensional object 300 according to the cutting information);

acquiring posture information of the three-dimensional model (Lu- ¶0129, at least discloses the user can control the rotation of the internal virtual section content by changing the spatial posture of the physical object, thereby changing the display angle of the internal virtual section content; ¶0144, at least discloses The physical object monitoring module 580 is used to acquire images containing physical objects, identify physical objects in the images, and determine the posture information of physical objects. Based on the posture information of physical objects, it determines the display angle of the internal virtual cross section content corresponding to the posture information);

intercepting the three-dimensional model by using the interception plane based on the posture information to obtain an interception result (Lu- Fig. 2 and ¶0110, at least disclose when the three-dimensional object is a simulated heart model 300 [three-dimensional model], the terminal device 100 determines the virtual cutting plane 401 for cutting the simulated heart model 300 [intercepting the three-dimensional model by using the interception plane] based on the 6DoF information of the interactive device 200. After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033 [interception result]. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100, and the second stereo model 4033 is located on the side of the cutting plane 401 away from the cutting plane, so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100; ¶0144, at least discloses Based on the posture information of physical objects, it determines the display angle of the internal virtual cross section content corresponding to the posture information), wherein the interception result comprises a partial model of the three-dimensional model located on a first side of the interception plane (Lu- Fig. 2 and ¶0110, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100 [a partial model of the three-dimensional model located on a first side of the interception plane], and the second stereo model 4033 is located on the side of the cutting plane 401 away from the cutting plane, so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100); and

displaying the partial model on the first side (Lu- ¶0008, at least discloses The display module is used to display the internal virtual cross-section content; ¶0030-0031, at least disclose The terminal device 100 is used to determine a virtual cutting plane 401 according to the spatial position information of the interactive device 200 […] the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet; ¶0067, at least discloses The method for processing internal virtual section content […] determines the spatial position of the virtual cutting plane based on the 6DoF information of the interactive device, and obtains and displays the internal virtual section content of the three-dimensional object based on the virtual cutting plane; Fig. 2 and ¶0110-0113, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100 [the partial model on the first side], […] so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100 […] After the terminal device obtains the internal virtual section content, it can display the internal virtual section content […] the internal virtual section content is a three-dimensional model formed by cutting a three-dimensional object with a virtual cutting plane. That is, the internal virtual section content includes a first virtual section content and a second virtual section content […] displaying the first virtual section content according to the first display parameter, and displaying the second virtual section content according to the second display parameter).

Lu does not explicitly disclose acquiring posture information of the three-dimensional model once after a preset time period. However, Takayama discloses acquiring posture information of the three-dimensional model once after a preset time period (Takayama- Figs. 15A-15C show generating an object 1580 in a virtual space. A player character 1510 that can move (including posture information) based on user input on a surface 1520 in a virtual space; ¶0042-0051, at least disclose process for generating an object 1580 in a virtual space […] FIG. 15A shows a player character 1510 movable in accordance with user inputs on a surface 1520 in a virtual space […] In FIG. 15B, the indicator 1570 is displayed with a two-dimensional shape (e.g., a square) and a three-dimensional shape (e.g., a pillar) corresponding to the shape of the object 1580 to be generated […] FIG. 15C illustrates the generated object 1580 in the virtual space. The object 1580 may be generated, for example, after the user positions the indicator 1570 at a particular location and when a specific condition is satisfied. For example, the specific condition may be a specific user input. The object 1580 may be progressively generated from the base in the base direction. For example, the object may appear from the base at the surface 1520 and progressively extend in the base direction until the object 1580 reaches a specific height set for the object 1580 […] After the object 1580 is generated, subsequent objects may be generated with the same process at other locations in the virtual space. The object 1580 may remain in the virtual space until a specific condition is satisfied. After the specific condition is satisfied, the generated object may be removed from the virtual space. The specific condition may, for example, be a predetermined amount of time [after a preset time period], user instructions to remove the generated object, or a subsequent object is generated after a user-generated object limit for the generating object is reached).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu to incorporate the teachings of Takayama, and apply the posture information of the three-dimensional model that can move based on user input into Lu's teachings for acquiring posture information of the three-dimensional model once after a preset time period. Doing so would provide an efficient manner to generate objects in the virtual space.
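For orientation, a minimal sketch of the interception technique these claims describe, assuming the model is a triangle mesh and the plane is given by a point and a unit normal; the function and variable names are illustrative only and are not taken from Lu, Takayama, or the application:

    # Illustrative sketch: intercept a triangle mesh with a plane and keep only the
    # faces on the "first side" (non-negative signed distance along the normal).
    import numpy as np

    def intercept_first_side(vertices: np.ndarray, faces: np.ndarray,
                             plane_point, plane_normal) -> np.ndarray:
        """Return the faces whose vertices all lie on the first side of the plane.

        Conservative sketch: triangles straddling the plane are dropped; a real
        clipper would re-triangulate them along the cut instead.
        """
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        # Signed distance of every vertex to the plane (positive on the normal side).
        dist = (vertices - np.asarray(plane_point, dtype=float)) @ n
        keep = (dist[faces] >= 0.0).all(axis=1)  # all three corners on the first side
        return faces[keep]

    # Example: two triangles cut by the plane z = 0; only the all-positive one survives.
    verts = np.array([[0.0, 0.0, -1.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [1.0, 1.0, 1.0]])
    tris = np.array([[0, 1, 2], [1, 3, 2]])
    print(intercept_first_side(verts, tris, plane_point=[0, 0, 0], plane_normal=[0, 0, 1]))

Rendering only the returned faces would correspond to the "rendering only the partial model on the first side" language discussed for claim 4 below.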
Regarding claim 4, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein the interception result (see Claim 1 rejection for detailed analysis) further comprises a partial model of the three-dimensional model located on a second side of the interception plane (Lu- Fig. 2 and ¶0110, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100, and the second stereo model 4033 [a second side] is located on the side of the cutting plane 401 away from the cutting plane, so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100), and the first side and the second side are respectively two sides of the interception plane (Lu- Fig. 2 and ¶0110, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 [the first side] and a second stereo model 4033 [the second side]. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100, and the second stereo model 4033 is located on the side of the cutting plane 401 away from the cutting plane, so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100); and the displaying the partial model on the first side (see Claim 1 rejection for detailed analysis) comprises: rendering only the partial model on the first side (Lu- Fig. 2 and ¶0110-0113, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100 [the partial model on the first side], […] so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100 […] After the terminal device obtains the internal virtual section content, it can display the internal virtual section content […] the internal virtual section content is a three-dimensional model formed by cutting a three-dimensional object with a virtual cutting plane. That is, the internal virtual section content includes a first virtual section content and a second virtual section content […] displaying the first virtual section content according to the first display parameter, and displaying the second virtual section content according to the second display parameter); and displaying the rendered partial model on the first side (Lu- ¶0067, at least discloses The method for processing internal virtual section content […] determines the spatial position of the virtual cutting plane based on the 6DoF information of the interactive device, and obtains and displays the internal virtual section content of the three-dimensional object based on the virtual cutting plane; Fig. 2 and ¶0110-0113, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100 [the partial model on the first side], […] so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100 […] After the terminal device obtains the internal virtual section content, it can display the internal virtual section content […] the internal virtual section content is a three-dimensional model formed by cutting a three-dimensional object with a virtual cutting plane. That is, the internal virtual section content includes a first virtual section content and a second virtual section content […] displaying the first virtual section content according to the first display parameter, and displaying the second virtual section content according to the second display parameter).

Regarding claim 7, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein the acquiring posture information of the three-dimensional model once after a preset time period (see Claim 1 rejection for detailed analysis) comprises: acquiring posture information of the first virtual object at a first position in a virtual environment once after a preset time period (Lu- ¶0129, at least discloses the user can control the rotation of the internal virtual section content by changing the spatial posture of the physical object, thereby changing the display angle of the internal virtual section content; ¶0144, at least discloses The physical object monitoring module 580 is used to acquire images containing physical objects, identify physical objects in the images, and determine the posture information of physical objects. Based on the posture information of physical objects, it determines the display angle of the internal virtual cross section content corresponding to the posture information; Takayama- Figs. 15A-15C show generating an object 1580 in a virtual space. A player character 1510 that can move (including posture information) based on user input on a surface 1520 in a virtual space; ¶0042-0051, at least disclose process for generating an object 1580 in a virtual space […] FIG. 15A shows a player character 1510 movable in accordance with user inputs on a surface 1520 in a virtual space […] In FIG. 15B, the indicator 1570 is displayed with a two-dimensional shape (e.g., a square) and a three-dimensional shape (e.g., a pillar) corresponding to the shape of the object 1580 to be generated […] FIG. 15C illustrates the generated object 1580 in the virtual space. The object 1580 may be generated, for example, after the user positions the indicator 1570 at a particular location and when a specific condition is satisfied […] After the object 1580 is generated, subsequent objects may be generated with the same process at other locations in the virtual space. The object 1580 may remain in the virtual space until a specific condition is satisfied. After the specific condition is satisfied, the generated object may be removed from the virtual space. The specific condition may, for example, be a predetermined amount of time [after a preset time period], user instructions to remove the generated object, or a subsequent object is generated after a user-generated object limit for the generating object is reached), and determining the posture information of the three-dimensional model based on a posture of the first virtual object at the first position (Lu- ¶0129, at least discloses the user can control the rotation of the internal virtual section content by changing the spatial posture of the physical object, thereby changing the display angle of the internal virtual section content; ¶0144, at least discloses Based on the posture information of physical objects, it determines the display angle of the internal virtual cross section content corresponding to the posture information; Takayama- Figs. 15A-15C show generating an object 1580 in a virtual space. A player character 1510 that can move (including posture information) based on user input on a surface 1520 in a virtual space), wherein the partial model on the first side is displayed at a second position in the virtual environment, and the first position and the second position are two different positions in the virtual environment (Lu- Figs. 6-7 and ¶0084, at least disclose At this time, the user moves the interactive device 200 upward and at the same time modifies the diameter data of the annular contour on the interactive device 200. Then, the terminal device determines the spatial height dimension and the corresponding cross-sectional contour parameters of the three-dimensional virtual object 400 corresponding to the 6DoF information based on the real-time 6DoF information of the interactive device 200. The spatial height dimension is determined by the displacement of the interactive device 200. […] As shown in Figure 7, the interactive device 200 moves upward by a distance D in the vertical direction. At this time, the drawing section currently being edited by the user moves from the drawing reference plane 405 in Figure 6 to the current drawing section 406 […] the current spatial model height of the three-dimensional virtual object 400 is determined by the relative distance between the current section outline and the end outline. Thus, the terminal device can determine the spatial structure data of the three-dimensional virtual object 400 based on the real-time 6DoF information and its corresponding drawing commands, thereby generating a three-dimensional model in the three-dimensional virtual object that extends from the end outline to the section outline -> the moving of the drawing reference plane 405 to the current drawing section 406 suggests a second position in the virtual environment; ¶0092, at least discloses The user moves the interactive device, and the terminal device determines the virtual cutting plane corresponding to the 6DoF information of the interactive device in real time based on the first relative positional relationship between the interactive device and the virtual cutting plane; ¶0099, at least discloses the virtual cutting plane can be freed from large-scale changes caused by the slight spatial movement of the interactive device. Therefore, the terminal device obtains the first set of spatial coordinates of the interactive device in the virtual space based on the 6DoF information of the interactive device, and then determines the second set of spatial coordinates of the virtual cutting plane in the virtual space based on the first relative position relationship, the first set of spatial coordinates of the interactive device, and the specified angle relationship between the reference axis and the virtual cutting plane).
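A minimal sketch of the "once after a preset time period" acquisition step as one might implement it, assuming a per-frame render loop and a hypothetical model object with position and rotation attributes (nothing here is taken from the cited references):

    # Illustrative sketch: acquire posture information at most once per preset period,
    # rather than on every frame. `model.position` / `model.rotation` are hypothetical.
    import time

    class PosturePoller:
        def __init__(self, period_s: float = 0.5, clock=time.monotonic):
            self.period_s = period_s   # the "preset time period"
            self.clock = clock
            self._last = clock()

        def maybe_acquire(self, model):
            """Call every frame; returns (position, rotation) only when the period elapses."""
            now = self.clock()
            if now - self._last < self.period_s:
                return None            # period not yet elapsed; keep using the old posture
            self._last = now
            return model.position, model.rotation

Under this reading, the interception would be recomputed only when maybe_acquire returns a fresh posture, which is consistent with the efficiency rationale the rejection attributes to the combination.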
Regarding claim 8, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein the determining an interception plane of a three-dimensional model of a first virtual object (see Claim 1 rejection for detailed analysis) comprises: determining the interception plane based on a battle situation of the first virtual object or a virtual environment where the first virtual object is located (Lu- Fig. 1 and ¶0030, at least disclose The terminal device 100 is used to determine the virtual cutting plane 401 [an interception plane] based on the spatial position information of the interactive device 200, determine the internal virtual section content corresponding to the three-dimensional object 100 based on the virtual cutting plane 401 [determining an interception plane of a three-dimensional model of a first virtual object], and control the display of the internal virtual section content according to the control instructions generated by the interactive device 200, so as to allow users to view the internal structure of the three-dimensional object 300 through augmented reality; ¶0037-0037, at least disclose after the terminal device 100 determines the spatial position of the virtual cutting plane 401, the virtual cutting plane 401 can be displayed according to the spatial position. The user can see the virtual cutting plane 401 superimposed on the interactive device 200 through the terminal device 100 […] The terminal device 100 can obtain the cutting information of the virtual cutting plane 401 for the three-dimensional object 300 according to the spatial position of the virtual cutting plane 401, and thus can obtain and display the internal virtual cutting content 403 of the three-dimensional object 300 according to the cutting information).

The computer device of claims 9, 12 and 15-16 is similar in scope to the functions performed by the method of claims 1, 4 and 7-8, and therefore claims 9, 12 and 15-16 are rejected under the same rationale.

Regarding claim 9, Lu in view of Takayama, discloses a computer device (Lu- Fig. 1 and ¶0030, at least disclose The augmented reality-based content processing system 10 includes: a terminal device 100, an interactive device 200), comprising a processor and a memory, wherein the memory has a computer program stored therein, and the computer program is loaded and executed by the processor to implement a virtual object display method (Lu- ¶0009, at least discloses a terminal device, including: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, and the one or more applications are configured to execute the augmented reality-based content processing method; Fig. 11 and ¶0148, at least disclose The terminal device 100 in this application may include one or more of the following components: processor 110, memory 120, image acquisition device 130, and one or more application programs, wherein the one or more application programs may be stored in memory 120 and configured to be executed by one or more processors 110, and the one or more programs are configured to perform the methods as described in the foregoing method) including the method of claim 1.

Regarding claims 17 and 20, all claim limitations are set forth as claims 1 and 7 in a non-transitory computer-readable storage medium, having a computer program stored therein, and are rejected as per the discussion for claims 1 and 7.

Regarding claim 17, Lu in view of Takayama, discloses a non-transitory computer-readable storage medium, having a computer program stored therein, wherein the computer program is loaded and executed by a processor of a computer device to implement a virtual object display method (Lu- ¶0009, at least discloses a terminal device, including: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, and the one or more applications are configured to execute the augmented reality-based content processing method; Fig. 11 and ¶0148, at least disclose The terminal device 100 in this application may include one or more of the following components: processor 110, memory 120, image acquisition device 130, and one or more application programs, wherein the one or more application programs may be stored in memory 120 and configured to be executed by one or more processors 110, and the one or more programs are configured to perform the methods as described in the foregoing method) including the method of claim 1.

5. Claims 2-3 and 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Lu in view of Takayama, and further in view of Mark A. Sullivan III [hereinafter "Sullivan"], "Soft Body Animation in Real-Time Simulations".

Regarding claim 2, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein the posture information comprises position information of the three-dimensional model (Lu- ¶0129, at least discloses the user can control the rotation of the internal virtual section content by changing the spatial posture of the physical object, thereby changing the display angle of the internal virtual section content); and the intercepting the three-dimensional model by using the interception plane based on the posture information to obtain an interception result (see Claim 1 rejection for detailed analysis) comprises: determining a distance between each point and the interception plane based on the position information of the point (Takayama- Figs. 15A-15C and ¶0048, at least disclose the indicator 1570 may include an object having a shape that corresponds to the object to be generated, a circle or sphere that expands from the base point in the base direction, a rectangle, a cube, a cylinder, rectangular prism, a cone, a pyramid, or a triangular prism. In an example embodiment, the indicator 1570 may be depicted as an arrow provided at the location of the base and pointing in the determined base direction. The arrow may be a static arrow depicting a direction in which the object will be generated (e.g., a base direction) and/or the height of the object after it is generated); and selecting a point whose distance meets a first condition to construct the partial model on the first side (Takayama- Figs. 15A-15C and ¶0048-0050, at least disclose the indicator 1570 may include an object having a shape that corresponds to the object to be generated, a circle or sphere that expands from the base point in the base direction, a rectangle, a cube, a cylinder, rectangular prism, a cone, a pyramid, or a triangular prism. In an example embodiment, the indicator 1570 may be depicted as an arrow provided at the location of the base and pointing in the determined base direction […] the generated object 1580 in the virtual space. The object 1580 may be generated, for example, after the user positions the indicator 1570 at a particular location and when a specific condition is satisfied).

The prior art does not explicitly disclose, but Sullivan discloses the method comprises: position information of each mesh point on a surface of the three-dimensional model (Sullivan- Fig. 3-1 shows Soft body discretization. Given a mesh such as this octopus, top image, we are able to select some set of cells to serve as a discrete approximation, middle image. Finally, we take the corners of those cells and place particles there, bottom image; page 19, section 3.1 Soft Body Representation, 3rd paragraph, at least discloses Once there is a set of cells that represent the mesh, particles are placed at each vertex of these cells. Each cell is a cuboid, so it will border eight particles. Particles will be shared by adjacent cells; duplicate particles won't be created in a single location. The collection of all these particles is known as the lattice. This process can be seen in Figure 3-1; Fig. 3-2 shows Mesh reconstruction. After simulating, an updated set of particle positions and velocities is produced. The particles and (reconstructed from this data) cells are shown in the top figure; page 24, section 3.4 Soft Body Rendering, 2nd paragraph, at least discloses The process of reconstructing the mesh involves trilinear interpolation. At initialization, every mesh vertex lies inside of a cell. Based on this starting position, we can define a set of eight weights for each vertex, one corresponding to each of the particles enclosing its occupied cell; Fig. 3-3 shows 2. Extend the nodes outward to connect to the surface mesh. (That is, close the mesh.), 2a. Assign to each node a vertex of the surface mesh; page 26, section 3.6 Fracture Rendering, 2nd paragraph, at least discloses Vertices which were once shared between formerly adjacent triangles are duplicated and assigned to either side, so that those triangles can become completely independent. A closing interior surface is then generated on each side of the mesh; page 32, 3rd paragraph, at least discloses step allows users to interactively preview the appearance of a mesh given a particular particle configuration. The vertices of the preview mesh are positioned based on the same interpolation techniques used to determine the vertex positions of a mesh from particle positions in simulation. Each mesh vertex is initially contained within a rectangular cuboid of vertices representing particles); the position information of the mesh point (Sullivan- page 32, 3rd paragraph, at least discloses step allows users to interactively preview the appearance of a mesh given a particular particle configuration. The vertices of the preview mesh are positioned based on the same interpolation techniques used to determine the vertex positions of a mesh from particle positions in simulation. Each mesh vertex is initially contained within a rectangular cuboid of vertices representing particles); and selecting a mesh point (Sullivan- page 32, 3rd paragraph, at least discloses step allows users to interactively preview the appearance of a mesh given a particular particle configuration. The vertices of the preview mesh are positioned based on the same interpolation techniques used to determine the vertex positions of a mesh from particle positions in simulation. Each mesh vertex is initially contained within a rectangular cuboid of vertices representing particles).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Takayama to incorporate the teachings of Sullivan, and apply the vertex positions of a mesh into Lu/Takayama's teachings such that the posture information comprises position information of each mesh point on a surface of the three-dimensional model; determining a distance between each mesh point and the interception plane based on the position information of the mesh point; and selecting a mesh point whose distance meets a first condition to construct the partial model on the first side. Doing so would make soft body use more appealing.
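A minimal sketch of the mesh-point selection recited in claim 2, assuming the surface mesh points are given as an N×3 array (names and the choice of "first condition" are illustrative, not from the cited art):

    # Illustrative sketch: compute each surface mesh point's signed distance to the
    # interception plane, then select the points meeting the "first condition"
    # (here: lying on the non-negative side) to construct the partial model.
    import numpy as np

    def select_first_side_points(mesh_points: np.ndarray,
                                 plane_point, plane_normal) -> np.ndarray:
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        dist = (mesh_points - np.asarray(plane_point, dtype=float)) @ n
        return mesh_points[dist >= 0.0]  # points contributing to the first-side partial model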
Regarding claim 3, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein the posture information comprises pose information of the three-dimensional model (Lu- ¶0129, at least discloses the user can control the rotation of the internal virtual section content by changing the spatial posture of the physical object, thereby changing the display angle of the internal virtual section content); and the intercepting the three-dimensional model by using the interception plane based on the posture information to obtain an interception result (see Claim 1 rejection for detailed analysis) comprises: construct the partial model on the first side (see Claim 1 rejection for detailed analysis).

The prior art does not explicitly disclose, but Sullivan discloses the method comprises: pose information of each bone of the three-dimensional model (Sullivan- page 15, section 2.3 Rigid Body Animation, 3rd paragraph, at least discloses If there is some model to be physically animated, it can be represented as a physical skeleton, which can interact with anything in the world. It's very amenable to the nonphysical skeletal representation of animation. So, one can use that same representation to define target transformations for all of the bones. An example of how this can be done can be seen in Figure 2-1. Tracking techniques such as those mentioned above can be used to let the bones track poses, and, using skeletal subspace deformation, the mesh will deform to reflect the changes based on the state of the underlying skeleton); determining a relative positional relationship between each bone and the interception plane based on the pose information of the bone (Sullivan- page 15, section 2.3 Rigid Body Animation, 3rd paragraph, at least discloses If there is some model to be physically animated, it can be represented as a physical skeleton, which can interact with anything in the world. It's very amenable to the nonphysical skeletal representation of animation. So, one can use that same representation to define target transformations for all of the bones. An example of how this can be done can be seen in Figure 2-1. Tracking techniques such as those mentioned above can be used to let the bones track poses, and, using skeletal subspace deformation, the mesh will deform to reflect the changes based on the state of the underlying skeleton); and selecting a bone whose relative positional relationship meets a second condition to construct the model on the first side (Sullivan- page 15, section 2.3 Rigid Body Animation, 3rd paragraph, at least discloses If there is some model to be physically animated, it can be represented as a physical skeleton, which can interact with anything in the world. It's very amenable to the nonphysical skeletal representation of animation. So, one can use that same representation to define target transformations for all of the bones. An example of how this can be done can be seen in Figure 2-1. Tracking techniques such as those mentioned above can be used to let the bones track poses, and, using skeletal subspace deformation, the mesh will deform to reflect the changes based on the state of the underlying skeleton).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Takayama to incorporate the teachings of Sullivan, and apply the target transformations for all of the bones into Lu/Takayama's teachings such that the posture information comprises pose information of each bone of the three-dimensional model; determining a relative positional relationship between each bone and the interception plane based on the pose information of the bone; and selecting a bone whose relative positional relationship meets a second condition to construct the partial model on the first side. Doing so would make soft body use more appealing.

The computer device of claims 10-11 is similar in scope to the functions performed by the method of claims 2-3, and therefore claims 10-11 are rejected under the same rationale.
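A corresponding sketch for the bone-based variant in claim 3, assuming each bone is a (head, tail) pair of joint positions from the skeleton pose and the "second condition" is that the bone lies wholly on the first side (again, illustrative assumptions only):

    # Illustrative sketch: classify each bone against the interception plane and
    # select the bones whose relative positional relationship meets the condition.
    import numpy as np

    def select_first_side_bones(bones, plane_point, plane_normal):
        """bones: iterable of (head, tail) 3-vectors from the model's skeleton pose."""
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        p0 = np.asarray(plane_point, dtype=float)
        selected = []
        for head, tail in bones:
            d_head = (np.asarray(head, dtype=float) - p0) @ n
            d_tail = (np.asarray(tail, dtype=float) - p0) @ n
            if d_head >= 0.0 and d_tail >= 0.0:  # second condition: both joints on the first side
                selected.append((head, tail))
        return selected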
6. Claims 5-6, 13-14 and 18-19 are rejected under 35 U.S.C. 103 as being unpatentable over Lu et al. ("Lu") [CN-111862333-A] in view of Takayama et al. ("Takayama") [US-2018/0357831-A1], and further in view of Baba et al. ("Baba") [US-2022/0323862-A1].

Regarding claim 5, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein the method further comprises: adjusting a display state of the three-dimensional model of the first virtual object (Lu- at least discloses that the display state of the 3D virtual object can be directly controlled to display the internal virtual cross-section content; ¶0141, at least discloses the internal virtual sectioning content includes an internal virtual sectioning surface, which is the cross-section formed inside the three-dimensional object when the virtual cutting plane sections the three-dimensional object […] modify the parameter data according to the touch command, and adjust the internal virtual sectioning surface according to the modified parameter data), wherein the display state comprises at least one of the following: information about the first virtual object correspondingly displayed during display of the interception result of the three-dimensional model (Lu- Fig. 2 and ¶0110, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100, and the second stereo model 4033 is located on the side of the cutting plane 401 away from the cutting plane, so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100), and display duration of the interception result of the three-dimensional model (Lu- Fig. 2 and ¶0110, at least disclose After the simulated heart model 300 is cut by the virtual cutting plane 401, its corresponding virtual cutting content can include a first stereo model 4031 and a second stereo model 4033. The first stereo model 4031 is located on the side of the cutting plane 401 closer to the terminal device 100, and the second stereo model 4033 is located on the side of the cutting plane 401 away from the cutting plane, so that the user can observe the cross-section of the three-dimensional object 300 after being cut on the second stereo model 4033 from the perspective of the terminal device 100).

The prior art does not explicitly disclose, but Baba discloses acquiring achievement information of at least one first virtual object and at least one other virtual object in a same battle (Baba- Figs. 11A-11D show the avatar object 610 fights against the enemy objects 671 in a same battle; ¶0191-0192, at least disclose The main game is a game in which the avatar object 610 who operates weapons, for example, guns and knives and a plurality of enemy objects 671 who is NPC appear in the virtual space 600 and the avatar object 610 fights against the enemy objects 671 […] A plurality of stages are prepared in the main game, and the player can clear the stage by establishing predetermined achievement conditions associated with each stage. Examples of the predetermined achievement conditions may include conditions established by defeating all the appearing enemy objects 671, defeating a boss object among the appearing enemy objects 671, acquiring a predetermined item, and reaching a predetermined position. The achievement conditions are defined in the game program 131. In the main game, the player clears the stage when the achievement conditions are established depending on the content of the game, in other words, a win of the avatar object 610 against the enemy objects 671 (win or loss between the avatar object 610 and the enemy object 671) is determined; ¶0231, at least discloses the user "AAAAA" supplies a magazine, for example, in a battle against a boss on a stage of a 10th floor and the avatar object 610 wins the boss with bullets of the supplied magazine); and adjusting a display state of the three-dimensional model of the first virtual object based on the achievement information (Baba- ¶0193-0195, at least disclose on the touch screen 15 of the user terminal 100 on which the user watches the game, a field-of-view image of the field-of-view area defined by the virtual camera 620B corresponding to the user terminal 100 is displayed […] on an upper right side and an upper left side of the field-of-view image, parameter images showing the physical strength of the avatar object 610, the number of usable magazines, the number of remaining bullets of the gun, and the number of remaining enemy objects 671 are displayed in a manner of being superimposed. The field-of-view image can also be expressed as a game screen […] The game information 132 includes data of various objects, for example, the avatar object 610, the enemy object 671, and the obstacle objects 672 and 673. The processor 10 uses the data and the analysis result of the game progress information to update the position, posture, and direction of each object. Thereby, the game progresses, and each object in the virtual space 600B moves in the same manner as each object in the virtual space 600A).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Takayama to incorporate the teachings of Baba, and apply the win of the avatar object against the enemy objects into Lu/Takayama's teachings for acquiring achievement information of at least one first virtual object and at least one other virtual object in a same battle; and adjusting a display state of the three-dimensional model of the first virtual object based on the achievement information. Doing so would provide a system for providing a game to a plurality of users.
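To make the claim 5 "display state" concrete, a small sketch under the assumption that the state pairs the information shown alongside the interception result with its display duration, adjusted from battle achievement information (entirely hypothetical structures and policy):

    # Illustrative sketch: a display state adjusted from achievement information.
    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class DisplayState:
        object_info: str           # info shown while the interception result is displayed
        display_duration_s: float  # how long the interception result stays on screen

    def adjust_for_achievements(state: DisplayState, achievements_met: int) -> DisplayState:
        # Assumed policy for illustration only: each achievement extends the display a little.
        return replace(state,
                       display_duration_s=state.display_duration_s * (1.0 + 0.1 * achievements_met))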
Regarding claim 6, Lu in view of Takayama, discloses the method according to claim 1, and further discloses wherein, after displaying the partial model (see Claim 1 rejection for detailed analysis), the method further comprises: adjusting a display layout corresponding to the partial model on the first side based on the abnormal control information (Lu- ¶0116, at least discloses By setting different parameters, users can display the virtual cutting plane in different states, such as hiding or showing the virtual cutting plane, to facilitate users' observation of the internal virtual cut content; ¶0141, at least discloses the internal virtual sectioning content includes an internal virtual sectioning surface, which is the cross-section formed inside the three-dimensional object when the virtual cutting plane sections the three-dimensional object […] modify the parameter data according to the touch command, and adjust the internal virtual sectioning surface according to the modified parameter data).

The prior art does not explicitly disclose, but Baba discloses acquiring behavior information of the first virtual object during a battle (Baba- ¶0051, at least discloses The behavior instruction data is data for reproducing a moving image on the user terminal 100, and specifically, is data for producing behaviors of characters appearing in the moving image; ¶0094, at least discloses The game play terminal 300 produces the behavior of a character to be operated by the player, on the basis of various types of information acquired from the respective units of the HMD 500, the controller 540, and the motion sensor 520, and controls the progress of the game. The "behavior" herein includes moving respective parts of the body, changing postures, changing facial expressions, moving, speaking, touching and moving the object arranged in the virtual space, and using weapons and tools gripped by the character […] at least some of the character's behaviors may be executed in response to an input to the controller 540 from the player), the behavior information comprising control information, for the first virtual object, of a controller corresponding to the first virtual object and a corresponding control effect (Baba- ¶0094, at least discloses The game play terminal 300 produces the behavior of a character to be operated by the player, on the basis of various types of information acquired from the respective units of the HMD 500, the controller 540, and the motion sensor 520, and controls the progress of the game. The "behavior" herein includes moving respective parts of the body, changing postures, changing facial expressions, moving, speaking, touching and moving the object arranged in the virtual space, and using weapons and tools gripped by the character […] at least some of the character's behaviors may be executed in response to an input to the controller 540 from the player); analyzing the behavior information of the first virtual object during the battle to obtain a behavior analysis result of the first virtual object during the battle (Baba- ¶0051-0052, at least disclose The behavior instruction data is data for reproducing a moving image on the user terminal 100, and specifically, is data for producing behaviors of characters appearing in the moving image […] the moving image reproduced on the user terminal 100 based on the behavior instruction data is a moving image in which the characters operated by the player in the game behave. The "behavior" is to move at least a part of a character's body, and also includes a speech […] sound data for controlling the character to speak and motion data for moving the character's body; ¶0094, at least discloses The game play terminal 300 produces the behavior of a character to be operated by the player, on the basis of various types of information acquired from the respective units of the HMD 500, the controller 540, and the motion sensor 520, and controls the progress of the game. The "behavior" herein includes moving respective parts of the body, changing postures, changing facial expressions, moving, speaking, touching and moving the object arranged in the virtual space, and using weapons and tools gripped by the character), the analysis result comprising abnormal control information of the controller for the first virtual object (Baba- ¶0094, at least discloses The game play terminal 300 produces the behavior of a character to be operated by the player, on the basis of various types of information acquired from the respective units of the HMD 500, the controller 540, and the motion sensor 520, and controls the progress of the game. The "behavior" herein includes moving respective parts of the body, changing postures, changing facial expressions, moving, speaking, touching and moving the object arranged in the virtual space, and using weapons and tools gripped by the character […] at least some of the character's behaviors may be executed in response to an input to the controller 540 from the player; ¶0118, at least discloses The interactive device can also detect different control operation parameters (such as touch position parameters and touch count parameters) on the interactive device and send different operation commands).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Lu/Takayama to incorporate the teachings of Baba, and apply the behavior of a character into Lu/Takayama's teachings for acquiring behavior information of the first virtual object during a battle, the behavior information comprising control information, for the first virtual object, of a controller corresponding to the first virtual object and a corresponding control effect; and analyzing the behavior information of the first virtual object during the battle to obtain a behavior analysis result of the first virtual object during the battle, the analysis result comprising abnormal control information of the controller for the first virtual object. Doing so would provide a system for providing a game to a plurality of users.

The computer device of claims 13-14 is similar in scope to the functions performed by the method of claims 5-6, and therefore claims 13-14 are rejected under the same rationale.

Regarding claims 18-19, all claim limitations are set forth as claims 5-6 in a non-transitory computer-readable storage medium, having a computer program stored therein, and are rejected as per the discussion for claims 5-6.

Conclusion

7. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. They are as recited in the attached PTO-892 form.

8. Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHAEL LE, whose telephone number is (571) 272-5330. The examiner can normally be reached 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kent Chang, can be reached at (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHAEL LE/
Primary Examiner, Art Unit 2614

Prosecution Timeline

Jun 18, 2024
Application Filed
Jan 09, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579211
AUTOMATED SHIFTING OF WEB PAGES BETWEEN DIFFERENT USER DEVICES
2y 5m to grant; granted Mar 17, 2026

Patent 12579738
INFORMATION PRESENTING METHOD, SYSTEM THEREOF, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
2y 5m to grant; granted Mar 17, 2026

Patent 12579072
GRAPHICS PROCESSOR REGISTER FILE INCLUDING A LOW ENERGY PORTION AND A HIGH CAPACITY PORTION
2y 5m to grant; granted Mar 17, 2026

Patent 12573094
COMPRESSION AND DECOMPRESSION OF SUB-PRIMITIVE PRESENCE INDICATIONS FOR USE IN A RENDERING SYSTEM
2y 5m to grant; granted Mar 10, 2026

Patent 12558788
SYSTEM AND METHOD FOR REAL-TIME ANIMATION INTERACTIVE EDITING
2y 5m to grant; granted Feb 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 66%
With Interview (+22.1%): 88%
Median Time to Grant: 3y 3m
PTA Risk: Low
Based on 864 resolved cases by this examiner. Grant probability derived from career allow rate.
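A quick check of how the headline figures follow from the examiner statistics above, assuming (as the footnote states) that grant probability is taken from the career allow rate and that the interview figure adds the +22.1-point lift:

    # Reproducing the dashboard's displayed projections from its own inputs.
    granted, resolved = 568, 864
    base = granted / resolved        # 0.6574... -> shown as 66%
    with_interview = base + 0.221    # +22.1% interview lift -> 0.8784... -> shown as 88%
    print(f"base {base:.0%}, with interview {with_interview:.0%}")  # base 66%, with interview 88%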
