Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 5-6, 9-11, 13-14, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu et al. (US 11861775 B2; hereinafter Zhu) in view of Iyer et al. (US 20210074064 A1; hereinafter Iyer) in further view of Lengyel et al. (Jed Lengyel et al., Rendering With Coherent Layers, August 1997, Proceedings of the 24th annual conference on Computer graphics and interactive techniques, pages 233-242; hereinafter Lengyel).
Regarding claim 9, Zhu teaches A computer device comprising a processor and a memory, the memory storing at least one program, and the at least one program, when executed by the processor, causing the computer device to implement a picture rendering method ("In a design, the electronic device may include one or more memories. The memory stores instructions or intermediate data. The instructions may be run on the processor, so that the electronic device performs the method described in the foregoing method embodiments. In some embodiments, the memory may further store other related data. The processor and the memory may be separately disposed, or may be integrated together," (col 10, lines 59-67; col 11, lines 1-8). The disclosed instructions read on at least one program.) including:
determining object changed content of a first rendering object from an ith frame of interaction picture to an (i+1)th frame of interaction picture ("Step 201: Obtain first picture data of a current frame. Step 202: Compare the first picture data with currently recorded second picture data of a previous frame, to determine a first part that is in the first picture data and that does not change with respect to the second picture data and a second part that is in the first picture data and that changes with respect to the second picture data," (col 4, lines 52-76; col 5, lines 1-27). The disclosed previous frame and current frame read on the ith frame and (i+1)th frame respectively.
"During comparison of the first picture data of the current frame with the second picture data of the previous frame,... data that is used to describe a same static object and that is in the first picture data and the second picture data is compared, to determine whether a virtual space position and a status of the static object change, for example, whether the static object changes from the static state to the moving state and whether a structure/shape changes," (col 6, lines 9-27). Determining whether a virtual space position and a status of the static object change reads on determining object changed content. The “same static object” reads on a first rendering object.
"For example, a picture in this embodiment may be understood as a 3D game picture or a 3D animation picture with a fixed field of view," (col 5, lines 32-46). Games are interactive, and thus the disclosed frames correspond to an interaction picture.); rendering,
based on the object changed content, a first rendering result corresponding to the first rendering object; and performing overlaying processing on the first rendering result and a second rendering result, to obtain the (i+1)th frame of interaction picture (“A game picture shown in FIG. 3b is a next frame of picture of a game picture shown in FIG. 3a. …Due to movement of a moving object 34, a visual range in FIG. 3b changes with respect to that in FIG. 3a…. In comparison with FIG. 3a, an object 33 is a new object, and does not have a corresponding rendering result in FIG. 3a. Therefore, the object 33 and the moving object 34 need to be re-rendered together,” (col 6, lines 28-55; Fig. 3a; Fig. 3b). Because object 33 and object 34 changed between consecutive frames, a new rendering result for these objects was rendered to obtain the next frame. Either object 33 or 34 can read on the first rendering object.), the second rendering result being a rendering result corresponding to a second rendering object that is not changed between the (i+1)th frame of interaction picture and the ith frame of interaction picture (“During rendering of the picture in FIG. 3b, corresponding objects in FIG. 3b and FIG. 3a are compared. For FIG. 3b and FIG. 3a, positions and statuses of a static object 31 and a light source 32 in virtual space do not change, and therefore rendering results of the static object 31 and the light source 32 in FIG. 3a may be reused.” Object 31 reads on a second rendering object.).
Zhu does not teach changing the rendering of a first rendering object in response to an interaction instruction by a user of the computer device.
Iyer teaches changing the rendering of a first rendering object in response to an interaction instruction by a user of the computer device (“The rendering device 102 dynamically renders VR content in response to user interactions with one or more objects within the VR environment. User interactions may include, but are not limited to user actions or attributes determined through multiple modalities, such as cameras (for gestures, expressions, or movements), microphone, or haptic gloves (touch sensors). The user interactions may also include any type of user intervention,” (page 2, para [0025]). "The VR content may include VR objects which may be stored in an order in which the VR objects are to be rendered to the user, considering provisioning for user interactions from the user or other actors (for example, a virtual chef operating the mixer grinder)," (page 7, para [0067]). Iyer discloses that VR content may include VR objects, thus objects are rendered in response to user interactions. The user interactions read on the interaction instruction by a user of the computer device.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Iyer to Zhu. The motivation would have been to facilitate real-time change of the content, in response to user interactions, in order to provide a seamless, realistic experience to a user.
Zhu in view of Iyer does not explicitly teach that the first rendering result is a first object layer nor that the second rendering result is a second object layer. Further, Zhu in view of Iyer does not explicitly teach performing overlaying processing on the first object layer and a second object layer.
Lengyel teaches that the first rendering result is a first object layer and that the second rendering result is a second object layer (“The layered pipeline separates or factors the scene into layers that represent the appearance of an object (e.g., a space ship separate from the star field background) or a special lighting effect (e.g., a shadow, reflection, highlight, explosion, or lens flare.) Each layer produces a 2D-image stream as well as a stream of 2D transformations that place the image on the display. We use sprite to refer to a layer’s image (with alpha channel) and transformation together," (page 2, section 1 Introduction, para 1-2). "Geometry factoring should consider the following properties of objects and their motions:
1. Relative velocity – A sprite that contains two objects moving away from each other must be updated more frequently than two sprites each containing a single object (Figure 3). Relative velocity also applies to shading.
2. Perceptual distinctness – Background elements require fewer samples in space and time than foreground elements, and so must be separated into layers to allow independent control of the quality parameters," (page 3, section 2 Factoring, para 1; page 3, section 2.1 Factoring Geometry, para 1-4; page 3, Fig 3). A sprite/layer reads on an object layer. Lengyel discloses that the background layers are sampled less frequently than foreground elements; thus, there will be frames in which foreground layers are updated while a background layer is not changed. A layer that contains a background reads on second object layer. Foreground layers/sprites that are updated more frequently read on a first object layer.).
Further, Lengyel teaches performing overlaying processing on the first object layer and a second object layer ("The resulting sprites with alpha channel are compressed and written to sprite memory. In parallel with 3D rendering, for every frame, the compositor applies an affine warp to each of an ordered set of sprites uncompressed from sprite memory and composites the sprites just ahead of the video refresh," (page 3, section 1.1 The Layered Pipeline and Talisman, para 1-2; page 2, Fig 2). Compositing the layers reads on overlaying processing.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Lengyel to Zhu in view of Iyer. The motivation would have been to “better [exploit] coherence by separating fast-moving foreground objects from slowly changing background layers,” (page 2, section 1 Introduction, para 3). Additional motivation would have been to “more optimally [target] rendering resources by allowing less important layers to be degraded to conserve resources for more important layers,” (page 2, section 1 Introduction, para 3). Further motivation would have been to “naturally [integrate] 2D elements such as overlaid video, offline rendered sprites, or hand-animated characters into 3D scenes,” (page 2, section 1 Introduction, para 3).
Regarding claims 1 and 17, they are rejected using the same citations and rationales described in the rejection of claim 9. Claim 17 additionally recites A non-transitory computer-readable storage medium storing at least one program therein, the at least one program, when executed by a processor of a computer device, causing the computer device to implement a picture rendering method. Zhu teaches A non-transitory computer-readable storage medium storing at least one program therein, the at least one program, when executed by a processor of a computer device, causing the computer device to implement a picture rendering method (Zhu; “An embodiment of this application further provides a computer-readable storage medium. The computer-readable storage medium stores a computer program. When the computer program is run on a computer, the computer is enabled to perform the picture rendering method in the foregoing embodiments,” (col 10, lines 33-38).).
Regarding claim 10, Zhu in view of Iyer in further view of Lengyel teaches the computer device according to claim 9, wherein the rendering, based on the object changed content, a first object layer corresponding to the first rendering object comprises: determining a layer rendering manner based on the object changed content; and rendering, in the layer rendering manner, the first object layer corresponding to the first rendering object (Lengyel; "To take advantage of frame-to-frame coherence, we generalize layer factorization to apply to both dynamic geometric objects and terms of the shading model, introduce new ways to trade off fidelity for resource use in individual layers, and show how to compute warps that reuse renderings for multiple frames. We describe quantities, called fiducials, that measure the fidelity of approximations to the original image. Layer update rates, spatial resolution, and other quality parameters are determined by geometric, photometric, visibility, and sampling fiducials weighted by the content author’s preferences," (page 2, Abstract, para 2).
"Fiducials measure the fidelity of the approximation techniques. Our fiducials are of four types. Geometric fiducials measure error in the screen-projected positions of the geometry. Photometric fiducials measure error in lighting and shading. Sampling fiducials measure the degree of distortion of the image samples. Visibility fiducials measure potential visibility artifacts," (Lengyel; page 8, section 5 Fiducials, para 1-2). The fiducials read on object changed content since they reflect what changed about object between frames.
"The fiducial threshold provides a cutoff below which no attempt to re-render the layer is made (i.e., the image warp approximation is used). The regulator considers each frame separately, and performs the following steps:
1. Compute warp from previous rendering.
2. Use fiducials to estimate benefit of each warped layer.
3. Estimate rendering cost of each layer.
4. Sort layers according to benefit/cost.
5. Use fiducial thresholds to choose which layers to re-render.
6. Adjust parameters of chosen layers to fit within budget.
7. Render layers in order, stopping when all resources are used," (Lengyel; page 8, section 6 Regulation, para 1-3). The layer rendering manners include re-rendering the layer and reusing the layer with the image warp approximation. The regulator determines the layer rendering manner in step 5 by "using fiducial thresholds to choose which layers to re-render," (Lengyel; page 8, section 6 Regulation, para 1-3). Figure 14 further supports these findings. "if error exceeds threshold re-render and cache current positions of characteristic points else display sprite with computed transformation,” (Lengyel; page 6, Fig. 14). Lengyel’s step 7 reads on rendering in the layer rendering manner, as it renders according to the manner chosen in step 5. This process is done for each layer in each frame; thus, it is done for the first object layer.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Lengyel to Zhu in view of Iyer. The motivation would have been to “[allow] rendering resources to be used for other tasks,” (page 8, section 6 Regulation, para 2). In other words, the motivation would have been to improve resource efficiency.
Regarding claim 2, it is rejected using the same citations and rationales described in the rejection of claim 10.
Regarding claim 11, Zhu in view of Iyer in further view of Lengyel teaches the computer device according to claim 10, wherein the determining a layer rendering manner based on the object changed content comprises: when the object changed content indicates that a display status of the first rendering object is changed, determining that the layer rendering manner is a layer re-rendering (Lengyel; “The fiducial threshold provides a cutoff below which no attempt to re-render the layer is made (i.e., the image warp approximation is used). The regulator considers each frame separately, and performs the following steps:… 5. Use fiducial thresholds to choose which layers to re-render... Sprites that have been selected for re-rendering [step 5] are allocated part of this total budget in proportion to their desired area divided by the total desired area of the selected set,” (page 8, section 6 Regulation, para 1-3). Exceeding the fiducial threshold reads on change in display status.);
when the object changed content indicates that a display position of the first rendering object is changed, determining that the layer rendering manner is layer coordinate adjustment (Lengyel; "Geometric fiducials measure error in the screen-projected positions of the geometry," (page 8, section 5 Fiducials, para 1). Screen-projected positions read on display positions.
"Let P̂ be a set of characteristic points from an initial rendering, let P be the set of points at the current time, and let W be the warp computed to best match P̂ to P. The geometric fiducial is defined as [media_image1.png: equation defining the geometric fiducial]," (Lengyel; page 8, section 5.1 Geometric Fiducials). In this step a change in the screen-projected position is detected.
"Once created, the image can be warped in subsequent frames to approximate its underlying motion, until the approximation error grows too large. Although the discussion refers to the Talisman reference architecture with its 2D affine image warp, the ideas work for other warps as well," (Lengyel; page 5, section 3 Image Rendering, para 1). Underlying motion necessitates a change in display position. Affine image warp approximates this change, and thus affine transformation becomes the layer rendering manner.
"A 2D affine transform is represented by a 2x3 matrix, where the right column is translation and the left 2x2 is the rotation, scale, and skew," (Lengyel; page 6, section 4.1 Affine Warp, para 1-5; page 6, section 4.2 Comparison of Warps, para 1-4). The affine image warp reads on coordinate adjustment as translation and rotation are coordinate adjustments.); and
when the object changed content indicates that a picture perspective is changed, determining that the layer rendering manner is at least one of layer coordinate adjustment and layer size adjustment (Lengyel; "Each series involved the animation of a moving rigid body and/or moving camera to see how well image warping approximates 3D motion. We tried several types of rigid bodies, including nearly planar and non-planar examples. We also tried many animated trajectories for each body including translations with fixed camera, translations accompanied by rotation of the body along various axes with various rotation rates, and head turning animations with fixed objects. The types of 2D image warps considered were 1. pure translation, 2. translation with isotropic scale, 3. translation with independent scale in x and y, 4. general affine, and 5. general perspective," (page 6, section 4.2 Comparison of Warps, para 1-4; page 6, Figure 14).
The moving camera causes the picture perspective to be changed. Lengyel discloses determining which warp-type to use with the algorithm in Figure 14. Each warp-type reads on a layer rendering manner. Pure translation and general affine read on layer coordinate adjustment. Translation with isotropic scale and translation with independent scale in x and y read on layer size adjustment.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Lengyel to Zhu in view of Iyer. The motivation would have been “to minimize the number of renderings by approximating with an image warp of a particular type,” (Lengyel; page 6, section 4.2 Comparison of Warps, para 3). Additional motivation would have been to improve resource efficiency.
Regarding claim 3, it is rejected using the same citations and rationales described in the rejection of claim 11.
Regarding claim 13, Zhu in view of Iyer in further view of Lengyel teaches the computer device according to claim 9, wherein the method further comprises: disassembling a displayed object affected by the interaction instruction into a plurality of rendering objects including the first rendering object and performing layer division on an interaction picture related to the interaction process based on the plurality of rendering objects to obtain a plurality of object layers including the first object layer and the second object layer (Lengyel;
[media_image2.png: reproduction of Lengyel’s Figure 24]
"Figure 24: CHICKEN CROSSING sequence used 80 layers, some of which are shown separately (left and bottom) and displayed in the final frame with colored boundaries (middle). The sprite sizes reflect their actual rendered resolutions relative to the final frame. The rest of the sprites (not shown separately) were rendered at 40-50% of their display resolution. Since the chicken wing forms an occlusion cycle with the tailgate, the two were placed in a single sprite (bottom)," (page 11, Fig. 24).
"2. Perceptual distinctness – Background elements require fewer samples in space and time than foreground elements, and so must be separated into layers to allow independent control of the quality parameters," (Lengyel; page 3, section 2 Factoring, para 1; page 3, section 2.1 Factoring Geometry, para 1-4; Fig 3). Lengyel discloses that the background layers are sampled less frequently than foreground elements; thus, there will be frames in which foreground layers are updated while a background layer is not changed.
The chicken, in Lengyel’s Figure 24, reads on a displayed object. It is clear from Figure 24 that the chicken has been disassembled into a head, far wing, body, and near wing. The chicken body parts read on a plurality of rendering objects. Any one of the chicken body parts reads on a first rendering object.
The layers, in Lengyel’s Figure 24, which are shown separately (left and bottom) and displayed in the final frame with colored boundaries (middle), as well as the rest of the sprites (not shown separately), read on a plurality of object layers. The layers which are shown separately (left and bottom) are clearly based on the chicken body parts; any one of these chicken-body-part-based layers is in the foreground and reads on a first object layer. The rest of the sprites (not shown separately) were rendered at 40-50% of their display resolution, which supports that they are background layers, reading on second object layer. The layer division is based on the plurality of rendering objects since each foreground layer depicts one chicken body part (one rendering object), and the background layers are depicted separately from the chicken object.
Iyer; “The rendering device 102 dynamically renders VR content in response to user interactions with one or more objects within the VR environment. User interactions may include, but are not limited to user actions or attributes determined through multiple modalities, such as cameras (for gestures, expressions, or movements), microphone, or haptic gloves (touch sensors). The user interactions may also include any type of user intervention,” (Iyer; page 2, para [0025]). "The VR content may include VR objects which may be stored in an order in which the VR objects are to be rendered to the user, considering provisioning for user interactions from the user or other actors (for example, a virtual chef operating the mixer grinder)," (Iyer; page 7, para [0067]). Iyer teaches an interaction picture since the VR content within the VR environment can be interacted with by a user.
After combination with Iyer, Lengyel’s animation becomes an interaction picture rendered in Iyer’s VR environment. When Iyer’s rendering device dynamically renders VR content in response to user interactions with one or more objects within the VR environment, Lengyel’s chicken becomes an object that is affected by a user interaction (user interaction reads on interaction instruction). The layer division is related to the interaction process because when Iyer’s rendering device dynamically renders VR content in response to user interactions, Lengyel’s layer-based rendering necessitates the distinction of layers.);
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Lengyel to Zhu in view of Iyer. The motivation would have been to “allow [warping] per coherent object,” (Lengyel; page 3, section 1.2 Previous work, para 1).
Regarding claims 5 and 19, they are rejected using the same citations and rationales described in the rejection of claim 13.
Regarding claim 14, Zhu in view of Iyer in further view of Lengyel teaches the computer device according to claim 9, wherein the method further comprises: performing overall picture rendering on the (i+1)th frame of interaction picture when a picture change amplitude based on the object changed content instructed by the interaction instruction is greater than a picture change amplitude instructed by another interaction instruction (Zhu; "During comparison of the first picture data of the current frame with the second picture data of the previous frame, corresponding parts in the first picture data and the second picture data may be compared. For example, a first visual range described in the first picture data is compared with a second visual range described in the second picture data, to determine whether a virtual space position and a size of the first visual range changes with respect to the second visual range; data that is used to describe a same static object and that is in the first picture data and the second picture data is compared, to determine whether a virtual space position and a status of the static object change, for example, whether the static object changes from the static state to the moving state and whether a structure/shape changes; and data that is used to describe the light source and that is in the first picture data and the second picture data is compared, to determine whether a virtual space position (for example, a height and an orientation) and a status (for example, an illumination angle and illumination intensity) of the light source change. For example, it is assumed that FIG. 3a and FIG. 3b are a schematic diagram of two frames of game pictures according to an embodiment of this application...When a specific rendering operation is performed, a rendering result, in FIG. 
3a, corresponding to a part that does not change may be copied to a preset memory buffer (for example, a frame buffer, Framebuffer) for reuse, and on the basis of the reused rendering result, incremental rendering is performed on a part that changes, to obtain a rendering result of the picture in FIG. 3b," (col 6, lines 9-55; Fig. 3a; Fig. 3b). The picture change amplitude is mapped to how a change, such as in the visual range, the virtual space position of a static object, and/or the status of a static object between frames, is reflected in the rendering result image. For example, when the object moves (change in virtual space position), there will be a picture change amplitude. The picture change amplitude is based on the object changed content because a change in a virtual space position and/or a status of a static object reads on object changed content. Overall picture rendering is mapped to the rendering that produces the rendering result of the picture in Figure 3b. Figure 3b shows that the overall picture is rendered. The overall picture rendering includes the incremental rendering.
When the movement in the virtual space position of an object is greater than that in a previous frame, the picture change amplitude of the current frame is greater than that of the previous frame. The same algorithm is used for multiple consecutive frames, and thus when the picture change amplitude of the current frame is greater than that of the previous frame, the overall picture rendering is performed.).
Regarding claim 6, it is rejected using the same citations and rationales described in the rejection of claim 14.
Claims 4, 12, and 18 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Iyer in further view of Lengyel in further view of Zhao (CN 113244614 A; hereinafter Zhao) in further view of Peng et al. (US 20200005736 A1; hereinafter Peng).
Regarding claim 12, Zhu in view of Iyer in further view of Lengyel fails to explicitly teach but Zhao teaches the computer device according to claim 9, wherein the performing overlaying processing on the first object layer and a second object layer, to obtain the (i+1)th frame of interaction picture comprises: determining layer transparency (Zhao; "In one possible implementation, in response to the image picture is a virtual scene picture, the first image element comprises an icon superimposed on the virtual scene picture, a button pattern corresponding to the virtual control and at least one of the pattern containing the text content; The second image element includes an image for displaying the virtual scene in the virtual scene image," (page 4).
"In one possible implementation, in response to the display mode is transparency synthesis display, the first interaction instruction comprises a first interaction parameter, the second interaction instruction comprises a second interaction parameter, the first interaction parameter and the second interaction parameter comprises the transparency information of the image element corresponding to each other; the terminal based on the transparency information of at least one first image element and at least one second image element, determining the transparency of at least one first image element and at least one second image element; based on the transparency of at least one first image element and at least one second image element, synthesizing and displaying at least one first image element and at least one second image element, so as to display the image picture." (Zhao; page 17).
After combination with Lengyel, Zhao’s first image element becomes Lengyel’s foreground layers/sprites that are updated more frequently, previously mapped to the first object layer. Zhao’s second image element becomes Lengyel’s background layers, which are sampled less frequently than foreground elements, previously mapped to the second object layer.);
performing transparency adjustment on the first object layer and the second object layer based on the layer transparency (Zhao; "…determining the transparency of at least one first image element and at least one second image element; based on the transparency of at least one first image element and at least one second image element, synthesizing and displaying at least one first image element and at least one second image element, so as to display the image picture… Exemplary, display mode is transparency synthesis of the first image element and the second image element, it may be synchronous display, also may be independently displayed. if the first image element and the second image element are synchronously displayed, then the synchronous first image element and the second image element for transparency synthesis display; if the first image element and the second image element are independently displayed, the terminal can directly perform transparency synthesis after receiving the first image element and the second image element, displaying the synthesized image in the image picture. Through the above process, it can realize the first image element finished by the final rendering, and the second image element finished by the server rendering based on the respective transparency for image synthesis, the synthesized image is displayed in the image picture, improving the display effect of the composite image in the image picture,” (Zhao; page 17).
Transparency synthesis of the first image element and the second image element reads on performing transparency adjustment.);
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Zhao to Zhu in view of Iyer in further view of Lengyel. The motivation would have been to “[improve] the display effect of the composite image in the image picture,” (Zhao; page 17).
Zhu in view of Iyer in further view of Lengyel in further view of Zhao fails to explicitly teach determining layer display orders of the first object layer and the second object layer.
Additionally Zhu in view of Iyer in further view of Lengyel in further view of Zhao fails to explicitly teach and performing, based on the layer display orders, overlaying processing on a first object layer and a second object layer that are obtained through the transparency adjustment, to obtain the (i+1)th frame of interaction picture.
Peng teaches determining layer display orders of the first object layer and the second object layer (Peng; “At an application framework layer, all layers (visible and invisible layers) form a list of layers, which is defined as ListAll. The layer synthesis module selects, from the ListAll, the visible layers to form a list of visible layers, which is defined as DisplayList. Thereafter, the layer synthesis module searches for an idle frame buffer (FB) from three reusable FBs in the system, and superimposes the visible layers in the DisplayList together in the idle FB through a synthesis operation according to application configuration information, so as to obtain a final picture to-be-displayed. The application configuration information may be, for example, which layer should be set at the bottom, which layer should be set at the top, which region should be visible, which region should be transparent, and so on,” (page 2, para [0025]). The layer display order is determined by the application configuration information.).
Additionally, Peng teaches and performing, based on the layer display orders, overlaying processing on a first object layer and a second object layer that are obtained through the transparency adjustment, to obtain the (i+1)th frame of interaction picture (Peng; “At an application framework layer, all layers (visible and invisible layers) form a list of layers, which is defined as ListAll. The layer synthesis module selects, from the ListAll, the visible layers to form a list of visible layers, which is defined as DisplayList. Thereafter, the layer synthesis module searches for an idle frame buffer (FB) from three reusable FBs in the system, and superimposes the visible layers in the DisplayList together in the idle FB through a synthesis operation according to application configuration information, so as to obtain a final picture to-be-displayed. The application configuration information may be, for example, which layer should be set at the bottom, which layer should be set at the top, which region should be visible, which region should be transparent, and so on,” (page 2, para [0025]). Superimposing the layers reads on overlaying processing.
After combination with Lengyel, Zhao, Zhu, and Iyer, the visible layers in Peng’s DisplayList include Lengyel’s first and second object layers, which have undergone Zhao’s transparency synthesis to obtain Zhu’s next frame of Iyer’s interaction picture.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Peng to Zhu in view of Iyer in further view of Lengyel in further view of Zhao. The motivation would have been to ensure that objects meant to be visible are, in fact, visible.
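For illustration only, the ordered superimposition Peng describes corresponds to back-to-front alpha compositing of a display list. The following Python sketch is not drawn from any cited reference; all names, fields, and values are hypothetical.

```python
# Illustrative sketch of ordered layer compositing (painter's algorithm).
# All names and values are hypothetical, not from any cited reference.
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    z_order: int      # lower values are drawn first (bottom layer)
    color: float      # single-channel "pixel" value for simplicity
    alpha: float      # transparency after adjustment, 0.0 to 1.0
    visible: bool = True

def composite(layers):
    """Superimpose visible layers bottom-to-top into one output value."""
    # Analogue of selecting DisplayList from ListAll: keep visible layers,
    # ordered by the configured display order.
    display_list = sorted((l for l in layers if l.visible),
                          key=lambda l: l.z_order)
    out = 0.0  # empty frame buffer
    for layer in display_list:
        # Standard "over" blend: this layer over the accumulated result.
        out = layer.alpha * layer.color + (1.0 - layer.alpha) * out
    return out

frame = composite([
    Layer("background", z_order=0, color=0.2, alpha=1.0),
    Layer("first_object", z_order=1, color=0.8, alpha=0.5),
    Layer("hidden", z_order=2, color=1.0, alpha=1.0, visible=False),
])
```

In this sketch the invisible layer is excluded before compositing, mirroring the selection of visible layers into the DisplayList, and the per-layer alpha stands in for the transparency adjustment applied before overlaying.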
Regarding claims 4 and 18, they are rejected using the same citations and rationales described in the rejection of claim 12.
Claims 7, 15, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Iyer in further view of Lengyel in further view of Kajiya et al. (US 5999189 A; hereinafter Kajiya).
Regarding claim 15, Zhu in view of Iyer in further view of Lengyel teaches the computer device according to claim 9, wherein the method further comprises: deleting a third object layer corresponding to a third rendering object displayed in the ith frame of interaction picture and not displayed in the (i+1)th frame of interaction picture (Zhu; "...A game picture shown in FIG. 3b is a next frame of picture of a game picture shown in FIG. 3a . ...Due to movement of a moving object 34, a visual range in FIG. 3 b changes with respect to that in FIG. 3a. A region in which a static object 36 is located is beyond the range of the picture. However, there is still a part of an overlapping region 35 between FIG. 3a and FIG. 3b. Therefore, a rendering result of the overlapping region 35 may be extracted from a rendering result in FIG. 3 a to render an overlapping region in FIG. 3b ...," (col 6, lines 28-55; Fig. 3a; Fig. 3b). Figure 3a and Figure 3b show adjacent frames. Static object 36 is beyond the range of the picture and is not reused in the next frame.
After combination with Lengyel, object 36 has its own sprite, which reads on object layer. Since the displayed frame does not contain an image representation of object 36, its sprite/object layer is deleted.); and
when a newly added fourth rendering object exists in the (i+1)th frame of interaction picture but not in the ith frame of interaction picture (Zhu; "...In comparison with FIG. 3a, an object 33 is a new object, and does not have a corresponding rendering result in FIG. 3a. Therefore, the object 33 and the moving object 34 need to be re-rendered together. When a specific rendering operation is performed, a rendering result, in FIG. 3a, corresponding to a part that does not change may be copied to a preset memory buffer (for example, a frame buffer, Framebuffer) for reuse, and on the basis of the reused rendering result, incremental rendering is performed on a part that changes, to obtain a rendering result of the picture in FIG. 3b," (col 6, lines 28-55; Fig. 3a; Fig. 3b). Object 33 reads on a newly added fourth rendering object, and it does not appear in the previous frame.).
Zhu in view of Iyer in further view of Lengyel does not teach, but Kajiya teaches searching a cache for a fourth object layer corresponding to the fourth rendering object (Kajiya; “In our system, multiple independent image layers may be composited together at video rates to create the output video signal. These image layers, which we refer to as generalized sprites, or gsprites, can be rendered into and manipulated independently. The system will generally use an independent gsprite for each non-interpenetrating object in the scene. This allows each object to be updated independently, so that object update rate can be optimized based on scene priorities," (col 5, lines 60-67; col 6, lines 1-3). The gsprite reads on object layer.
"The gsprite cache 452 stores decompressed, gsprite data (R G B) for sixteen 8x8 blocks. The data is organized so that 16 gsprite pixels can be accessed every clock cycle. The image processor address generator 454 is used to scan across each gsprite based on the specified affine transformation and calculate the filter parameters for each pixel. Gsprite cache addresses are generated to access gsprite data in the gsprite cache 452 and feed it to the gsprite filter engine 456. The image processor address generator 454 also controls the compositing buffer." (col 25, lines 52-61; Fig. 12B).
After combination, Zhu’s newly added object, object 33, has its own gsprite, which reads on the fourth object layer. Scanning across each gsprite in the gsprite cache constitutes searching the cache for each gsprite; after combination, this search includes the fourth object layer.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Kajiya to Zhu in view of Iyer in further view of Lengyel. The motivation would have been to increase rendering performance and reduce computational load.
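For illustration only, the cache search Kajiya describes follows the familiar lookup-or-render pattern for per-object layers. This Python sketch is not drawn from any cited reference; its function names and keys are hypothetical.

```python
# Illustrative sketch of an object-layer (sprite) cache lookup.
# Function names and keys are hypothetical, not from any cited reference.
def render_object(obj_id):
    """Stand-in for a full re-render of one object's layer."""
    return f"rendered:{obj_id}"

def get_object_layer(cache, obj_id):
    """Search the cache for the object's layer; render and insert on a miss."""
    layer = cache.get(obj_id)
    if layer is None:
        layer = render_object(obj_id)  # cache miss: re-render the layer
        cache[obj_id] = layer          # keep it for reuse in later frames
    return layer

# An existing object's layer is reused; a newly added object is rendered.
cache = {"object_34": "rendered:object_34"}
hit = get_object_layer(cache, "object_34")
miss = get_object_layer(cache, "object_33")
```

The sketch shows why a cached hit avoids a re-render while a newly added object (a miss) is rendered once and then cached, which is the performance benefit cited as the motivation to combine.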
Regarding claims 7 and 20, they are rejected using the same citations and rationales described in the rejection of claim 15.
Claims 8 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Zhu in view of Iyer in further view of Lengyel in further view of Agoston (US 20210362049 A1; hereinafter Agoston) in further view of Steeves et al. (US 20240144246 A1; hereinafter Steeves).
Regarding claim 16, Zhu in view of Iyer in further view of Lengyel does not teach but Agoston teaches the computer device according to claim 9, wherein the method further comprises: inputting first description information, second description information, third description information, and guidance information into a large language model, to obtain a reserved computing resource predicted by the machine learning model for rendering the first object layer corresponding to the first rendering object (Agoston; "The resource allocation agent 315 refers to a resource allocation model generated using machine learning algorithm to predict the types of resources and an amount of each type of resources required for execution of the online game," (page 4, para [0039]). The resource allocation model is a machine learning model.
"The training data includes users' inputs and game states of the online game, as well as a success criteria defined for the online game. A resource allocation model is generated using the training data, as illustrated in operation 920. The resource allocation model is generated using an artificial intelligence (AI) modeler that engages machine learning algorithm," (Agoston; pages 14-15, para [0088]- [0089]).
"The back-end servers provide a machine learning engine, for example, to analyze the game data generated in response to the inputs received from the users and train a resource allocation model (AI model) 315 for an online game that is selected for game play by the users. The AI model is initially generated from game data resulting from processing inputs provided by developers of the online game. The inputs provided by the developers may be from simulated game plays performed by the developer or inputs obtained from a controlled group of users playing the online game. The generated AI model is then trained on an ongoing basis using inputs from game plays of different users. The users' inputs are used to drive a game state of the online game," (Agoston; page 6, para [0047]).
The user’s inputs read on the first description information. The game data resulting from processing inputs provided by developers of the online game reads on the second description information. The game data generated in response to the inputs received from the users reads on third description information. The success criteria read on guidance information.
"Using the information provided by the resource allocation model, resources are allocated for executing the functional portions for the online game, as illustrated in operation 930. The functional portions perform select ones of game engine tasks related to specific ones of features of the game data. The allocated resources provide sufficient resources for processing the respective game engine tasks and the type and amount of resources allocated for executing the functional portions is dictated by the resource allocation model," (Agoston; page 15, para [0090]). The allocated resources read on reserved computing resource. The resource allocation model dictating the type and amount of resources allocated reads on prediction.
After combination with Lengyel, the resources are allocated as taught by Agoston for rendering Lengyel’s foreground layers/sprites that are updated more frequently, which read on a first object layer corresponding to the first rendering object.); and
amplifying the reserved computing resource, and performing resource reservation on the amplified reserved computing resource (Agoston; "Using information provided by the resource allocation model, the resource allocation agent determines if the allocation of resources during a current game play is sufficient to meet the specified success criterion or if the system resources need to be scaled up or down," (page 9, para [0056]).
"...a signal may be sent by the resource allocation agent to the configuration agent to proactively scale up or scale down specific ones of the system resources for the current session of game play for the online game, based on the predicted resource demand," (Agoston; pages 9-10, para [0056] - [0057]).
"The signal to the configuration agent 311 may identify the type of resource and the type of adjustment (scaling up, scaling down, provisioning new, de-provisioning existing ones, etc.) that needs to be made to achieve the success criteria defined for the online game. The configuration agent 311, in response to the signal from the resource allocation agent 313, is configured to elastically adjust the resources appropriately," (Agoston; pages 9-10, para [0057]).
Scaling up system resources reads on amplifying the reserved computing resource. Elastically adjusting the resources appropriately reads on performing resource reservation.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Agoston to Zhu in view of Iyer in further view of Lengyel. The motivation would have been to “…[provide] the flexibility to adapt to workload changes…,” (Agoston; page 7, para [0049]). Additional motivation would have been to use resources efficiently.
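For illustration only, amplifying a predicted reservation before reserving can be sketched as scaling the predicted amount up by a safety margin. This Python sketch is not drawn from any cited reference; the margin value and names are hypothetical.

```python
# Illustrative sketch of amplifying a predicted resource reservation.
# The margin value and names are hypothetical, not from any cited reference.
import math

def amplify_reservation(predicted_units, margin=0.25):
    """Scale a predicted resource amount up by a safety margin before
    reserving, so transient workload spikes do not starve rendering."""
    return math.ceil(predicted_units * (1.0 + margin))

# 100 predicted units amplified by a 25% margin before reservation.
reserved = amplify_reservation(100)
```

Reserving the amplified amount rather than the raw prediction corresponds to proactively scaling resources up based on predicted demand, as in the cited passage.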
Zhu in view of Iyer in further view of Lengyel in further view of Agoston does not teach, but Steeves teaches the machine learning model can be implemented as a large language model (Steeves; "In an embodiment, the server 604 can use a local machine learning model such as a large language model, input the received text, and then output the continuation probabilities of the text in the form of a set of text phrase and probability pairs," (page 8, para [0081]). Steeves teaches that a large language model is a type of machine learning model; after combination, the machine learning model disclosed by Agoston is implemented as a large language model.).
Before the effective filing date of the claimed invention, it would have been obvious to one having ordinary skill in the art to apply the teachings of Steeves to Zhu in view of Iyer in further view of Lengyel in further view of Agoston. The motivation would have been to enable more flexible and adaptive management of graphic resources.
Regarding claim 8, it is rejected using the same citations and rationales described in the rejection of claim 16.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ERICA G THERKORN whose telephone number is (571)272-2939. The examiner can normally be reached Monday - Friday 9:00am - 5:00pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Devona Faulk can be reached at 571-272-7515. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ERICA G THERKORN/Examiner, Art Unit 2618
/DEVONA E FAULK/Supervisory Patent Examiner, Art Unit 2618