DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to amended claims 1, 4, 11, and 14, filed on 01/29/2026, have been considered but are not persuasive.
However, the examiner finds that the amended limitations are taught by the references previously introduced.
In the Remarks at page 15, lines 14-15 and 33-34; page 16, lines 1-4 and line 31; and page 17, line 1 and lines 23-24, Applicant argued that Applicant respectfully traverses each of these rejections, and that Applicant amends claim 1 to recite as follows: “wherein the effect resource data further comprises shader information corresponding to each of the effect processing units, and the shader information is updated to dynamically distribute shaders and modify processing logic of the particles, the processing logic comprising at least one of generation logic of the particles, update logic of the particles, or rendering logic of the particles”… Applicant further argued that Guo fails to disclose or suggest the features "the effect..., that Sebastian also fails to disclose or suggest at least the above recited features of the subject claim, and that Sebastian does not cure the deficiencies of Guo.
The examiner respectfully disagrees with Applicant’s argument. In paragraph [0050], Sebastian discloses “The processing memory blocks 242 can be processed by effects processors 243 (e.g. particle processors) to generate renderable output data 244, exploit a customized shader to support a larger number of transparent layers, the renderer 245 includes an in-place memory strategy for supplying information to shaders that manage the rasterization. OpenGL can be used to implement shader code to manage the rasterization. The code can pick out information needed to create polygon vertices from the same memory blocks used by the particle engines, a set of particles associated with effects engine pre-processor 231 can be rendered at multiple locations in a scene by supplying a separate transform for each such location” and in paragraph [0045] “FIG. 9 shows a stack of particle parameter controls; particles are emitted according to a “DefaultMutiRing” control 210b and rendered according to a predefined particle “pattern” control 210f. Those and/or other particle parameter controls can affect color profiles of the particles (e.g., particle colors, particle color palates, how color changes over the particles' life, etc.), shape profiles of particles (e.g., particle shapes, textures, patterns, how shape changes over the particles' life, etc.), sizes of the particles (e.g., absolute and/or relative sizes, how sizes change over the particles' life).” Sebastian teaches shader information corresponding to each of the effect processing units (shader information needed to create polygon vertices from the same memory blocks used by the particle engines, e.g., effects processors 243, particle processors); that the shader information is updated to dynamically distribute shaders (rendered at multiple locations in a scene by supplying a separate transform for each such location); and that it modifies processing logic of the particles, including at least rendering logic of the particles (rendered according to a predefined particle “pattern” control 210f, Fig. 9).
Independent claims 11 and 14 have been amended similarly to claim 1 and are rejected based on the explanation above.
Dependent claims 2-9 and 15-23 depend on independent claims 1, 11, and 14, and are rejected for the same reasons as the current rejection.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-7, 11, 14-20 and 23 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (U.S. 2024/0037830 A1) in view of Sebastian et al. (U.S. 2017/0091983 A1).
Regarding Claim 1 (Currently amended), Guo discloses an effect processing method (Guo, [0002] “a method for generating a firework visual effect”) comprising:
acquiring effect resource data comprising a self-defined effect logic, the effect logic being used for specifying at least one effect processor and at least two associated effect processing units included in each effect processor (Guo, [0037] “based on the visual effect trajectory,…generating the firework visual effect”; [0038]-[0039] “obtain a firework particle primitive model set, obtain a trail particle primitive model set”; [0007] “when run by the processor, implement the method for generating the firework visual effect”; [0044] “both the firework particle set and the trail particle set are generated based on the GPU particle technology or the CPU particle technology”; and [0188] “The processor 400 may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU).” Guo teaches acquiring effect resource data (the visual effect trajectory) including a self-defined effect logic (e.g., a firework particle primitive model set, a trail particle primitive model set) being used for specifying at least an effect processor (the CPU particle) and an effect processing unit (the GPU particle));
generating the effect processor and the effect processing units according to the effect logic, each effect processor corresponding to a set of particles, and the effect processing units being used for processing the particles through a shader of a Graphic Processing Unit GPU (Guo, [0027] “CPU (Central Processing Unit) particles and GPU (Graphics Processing Unit) particles are two technical means to implement the particle effect”; [0037] “generating a firework particle set and a trail particle set in a three-dimension space for generating the firework visual effect”; and [0044] “both the firework particle set and the trail particle set are generated based on the GPU particle technology or the CPU particle technology.” Guo teaches generating a set of particles by an effect processor (CPU particle) and processing the particles by an effect processing unit (GPU particle)); and
according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor, to obtain an effect picture (Guo, Fig. 1, [0037] “Step S20, generating a firework particle set and a trail particle set in a three-dimension space”; [0038]-[0039] “Step S30: rendering the firework particle set, Step S40: rendering the trail particle set”; [0127] “one trail particle map group further includes a strip map group; the strip map group includes at least one strip particle map; in step S40, generating at least one patch model in one-to-one correspondence with the at least one strip particle, based on the at least one strip particle”; and [0157] “the first trail particle and the second trail particle in the firework particle set and the trail particle set may be generated by using the GPU particle technology, and the strip particle in the trail particle set may be generated by using the CPU particle technology.” Guo teaches, according to an association relationship (one trail particle map group further includes a strip map group), sequentially invoking the effect processing units to process particles (the GPU particle is used in step S30) corresponding to the effect processor (the CPU particle is used in step S40));
wherein each of the effect processing units is used for performing one of the following types of processing: generating the particles, updating attributes of the particles, and rendering the particles according to the attributes of the particles, and wherein the particles are display objects of geometry (Guo, [0065] “In a particle lifetime of each firework particle, updating the firework particle attribute values corresponding to the firework particle according to a rendering frame rate, and rendering the firework particle based on the updated firework particle attribute values and the corresponding firework particle map group”; [0041] “the firework visual effect and the target object are superimposed for display in the video”; [0012] “FIG. 2B spark particle maps in a spark particle map group provided”; [0013] “FIG. 2C first trail maps in a first trail map group provided”; and [0015] “FIG. 2E, a strip particle map provided.” Guo teaches that the GPU particle performs generating the particles, updating attributes of the particles, and rendering the particles according to the attributes of the particles, and that the particles are display objects of geometry (Figs. 2B, 2C, 2E)).
Guo discloses that shadow effects ultimately presented by these three-dimension models are defined (Guo, [0003]).
However, Guo does not explicitly teach the effect logic being used for specifying at least one effect processor and at least two associated effect processing units;
the effect processing units being used for processing the particles through a shader of a Graphic Processing Unit GPU.
wherein the effect resource data further comprises shader information
corresponding to each of the effect processing units, and the shader information is updated to dynamically distribute shaders and modify processing logic of the particles, the processing logic comprising at least one of generation logic of the particles, update logic of the particles, or rendering logic of the particles.
Sebastian teaches the effect logic being used for specifying at least one effect processor and at least two associated effect processing units (Sebastian, [0131] “FIG. 7 a method 700 of generating a percussive visual effect”; [0132] “The parameters may select the particle system to be triggered at stage 713 so that the artist has a choice of different percussive effects. Such a technique can manifest a similar visual effect to that of fireworks display patterns”; and [0049] “using a block-based approach can enable a single CPU processing step to serve many particles (e.g., AllocBlockSize). Particles in blocks can all be assigned to a same GPU work group, thereby sharing same parameters and a same instruction pipeline.” Sebastian teaches that percussive event models can generate the particle system for different visual effects and use one effect processor (CPU) and at least two associated effect processing units (GPUs));
the effect processing units being used for processing the particles through a shader of a Graphic Processing Unit GPU (Sebastian, [0050] “the renderer 245 includes an in-place memory strategy for supplying information to shaders that manage the rasterization, OpenGL can be used to implement shader code to manage the rasterization. The code can pick out information…from the same memory blocks used by the particle engines” and [0051] “rendering functions of the GPU can operate in OpenGL.” Sebastian teaches that the effect processing unit (GPU) renders the particles via a shader that manages the rasterization);
wherein the effect resource data further comprises shader information
corresponding to each of the effect processing units, and the shader information is updated to dynamically distribute shaders and modify processing logic of the particles, the processing logic comprising at least one of generation logic of the particles, update logic of the particles, or rendering logic of the particles (Sebastian, [0050] “The processing memory blocks 242 can be processed by effects processors 243 (e.g. particle processors) to generate renderable output data 244, exploit a customized shader to support a larger number of transparent layers, the renderer 245 includes an in-place memory strategy for supplying information to shaders that manage the rasterization. OpenGL can be used to implement shader code to manage the rasterization. The code can pick out information needed to create polygon vertices from the same memory blocks used by the particle engines, a set of particles associated with effects engine pre-processor 231 can be rendered at multiple locations in a scene by supplying a separate transform for each such location” and [0045] “FIG. 9 shows a stack of particle parameter controls; particles are emitted according to a “DefaultMutiRing” control 210b and rendered according to a predefined particle “pattern” control 210f. Those and/or other particle parameter controls can affect color profiles of the particles (e.g., particle colors, particle color palates, how color changes over the particles' life, etc.), shape profiles of particles (e.g., particle shapes, textures, patterns, how shape changes over the particles' life, etc.), sizes of the particles (e.g., absolute and/or relative sizes, how sizes change over the particles' life).” Sebastian teaches shader information corresponding to each of the effect processing units (shader information needed to create polygon vertices from the same memory blocks used by the particle engines, e.g., effects processors 243, particle processors); that the shader information is updated to dynamically distribute shaders (rendered at multiple locations in a scene by supplying a separate transform for each such location); and that it modifies processing logic of the particles, including at least rendering logic of the particles (rendered according to a predefined particle “pattern” control 210f, Fig. 9)).
Guo and Sebastian are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Guo to incorporate the effect logic used by an effect processor (CPU) and at least two effect processing units (GPUs), as taught by Sebastian, because Sebastian’s percussive event models can generate the particle system for different visual effects using one effect processor (CPU) and at least two associated effect processing units (GPUs) (Sebastian, [0131], [0132]). Doing so may allow the user to perform real-time synthesis of (e.g., generation of, and control over) visual elements in the environment and/or simply experience playback of such a synthesized (live or recorded) performance (Sebastian, [0027]).
Regarding Claim 2, a combination of Guo and Sebastian discloses the method of claim 1, wherein the at least one effect processor includes a first effect processor and a second effect processor which are associated with each other and respectively used to process at least one group of first particles and at least one group of second particles which are associated with each other; the effect picture includes a first sub-picture and a second sub-picture (Guo, [0157] “the strip particle in the trail particle set may be generated by using the CPU particle technology”; [0015] “FIG. 2E is a schematic diagram of a strip particle map”; and Fig. 2D, [0127] “rendering the ribbon model based on the at least one strip particle map, to obtain a strip particle primitive model.” Guo teaches a first effect processor (1st CPU) processing a group of first particles (a strip particle map), yielding a first sub-picture (Fig. 2E), and a second effect processor (2nd CPU) processing a group of second particles associated with the first (rendering the ribbon model based on the at least one strip particle map), yielding a second sub-picture (Fig. 2D)); and according to an association relationship between effect processing units included in the first effect processor, and sequentially invoking the effect processing units to process the particles corresponding to the effect processor to obtain the effect picture comprises:
according to an association relationship between the effect processing units included in the first effect processor, sequentially invoking the effect processing units to process the first particles to obtain the first sub-picture (Guo, [0157] “the first trail particle and the second trail particle in the firework particle set and the trail particle set may be generated by using the GPU particle”; [0125] “The plurality of trail particles further include at least one strip particle; the at least one strip particle is used for generating a strip portion in the firework trail”; and [0015] “FIG. 2E is a schematic diagram of a strip particle map.” Guo teaches an association relationship between the effect processing units (GPU) included in the first effect processor (CPU), e.g., the at least one strip particle is used for generating a strip portion in the firework trail, to obtain the first sub-picture (Fig. 2E));
taking particle identifiers of the first particles as an input to the second effect processor, and according to the association relationship between effect processing units included in the second effect processor, sequentially invoking the effect processing units to process the second particles to obtain the second sub-picture (Guo, [0126] “With respect to the strip particle, a particle rendering mode, may be adopted to render the strip particle, so as to obtain an elongated ribbon model generated along the extension trajectory” and Fig. 2D, [0127] “rendering the ribbon model based on the at least one strip particle map.” Guo teaches taking a particle identifier (a strip particle) as an input to the second effect processor (2nd CPU) to process the second particles (rendering the ribbon model) to obtain the second sub-picture (Fig. 2D)).
Regarding Claim 3, a combination of Guo and Sebastian discloses the method of claim 2, wherein the effect processing units in the effect processor include a particle processing unit and a particle rendering unit;
taking particle identifiers of the first particles as an input to the second effect processor, and according to the association relationship between the effect processing units included in the second effect processor, sequentially invoking the effect processing units to process the second particles to obtain the second sub-picture comprises:
taking particle identifiers of the first particles as an input to the particle processing unit, and invoking the particle processing unit to process the second particles, wherein the particle processing unit is used for performing one of the following types of processing: generating the second particles, and updating attributes of the second particles (Guo, [0157] “the trail particle set may be generated by using the GPU particle”; [0103] “acquiring at least one trail particle map group corresponding to the trail particle set; respectively rendering the plurality of trail particles, based on the at least one trail particle map group”; and [0104] “In a particle lifetime of each trail particle, updating the trail particle attribute values corresponding to the trail particle according to the rendering frame rate.” Guo teaches taking particle identifiers of the first particles (acquiring at least one trail particle map group) as an input to the particle processing unit (GPU) to generate the second particles (rendering the plurality of trail particles, based on the at least one trail particle map group) and update attributes of the second particles (according to the rendering frame rate));
invoking the particle rendering unit to render the second particles according to the attributes of the particles to obtain the second sub-picture (Guo, [0123] “in FIG. 2C, when a certain first trail particle is subjected to multiple renderings, during a first rendering, the first trail map T11 is used for rendering; then, during a second rendering, if the map needs to be switched, the first trail map T12 may be used for rendering.” Guo teaches invoking the particle rendering unit such that, during a second rendering, the first trail map T12 may be used for rendering to obtain the second sub-picture (Fig. 2C)).
Regarding Claim 4 (Currently amended), a combination of Guo and Sebastian discloses the method of claim 1. Guo does not explicitly teach:
upon invoking an effect processing unit according to the association relationship between the effect processing units included in the effect processor, uploading a corresponding shader according to the shader information;
invoking the shader to process the particles to obtain the effect picture.
However, Sebastian teaches upon invoking an effect processing unit according to the association relationship between the effect processing units included in the effect processor, uploading a corresponding shader according to the shader information (Sebastian, [0050] “the renderer 245 includes an in-place memory strategy for supplying information to shaders that manage the rasterization. For example, OpenGL (or DirectX, or other suitable frameworks) can be used to implement shader code to manage the rasterization. The code can pick out information needed to create polygon vertices from the same memory blocks used by the particle engines.” Sebastian teaches uploading a corresponding shader according to the shader information, e.g., supplying information to shaders that manage the rasterization, where OpenGL (or DirectX, or other suitable frameworks) can be used to implement shader code to manage the rasterization);
invoking the shader to process the particles to obtain the effect picture (Sebastian, [0050] “the renderer 245 includes an in-place memory strategy for supplying information to shaders that manage the rasterization…The code can pick out information needed to create polygon vertices from the same memory blocks used by the particle engines…each particle associated with a parameter stack output 220 or effects engine pre-processor 231 can be rendered as a separate “billboard” polygon” and [0151] “FIG. 14 shows an example image 1400 of an object placement…A series of rendered 3D objects shaped like balls appear to the left and right underneath a viewer.” Sebastian teaches that the shader processes (renders) the particles to obtain the effect picture (Fig. 14)).
Guo and Sebastian are combinable; see the rationale in claim 1.
Regarding Claim 5, a combination of Guo and Sebastian discloses the method of claim 1, wherein before according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor, to obtain an effect picture, the method further comprises:
receiving externally-input data sent by a Central Processing Unit (CPU), the externally-input data comprising at least one of: data extracted by the CPU from a user input instruction, and data obtained after converting the extracted data (Guo, [0157] “the strip particle in the trail particle set may be generated by using the CPU particle”; [0135] “a strip particle map provided, by adopting the strip particle map shown in FIG. 2E to render the ribbon model”; [0129] “FIG. 2D, it may be seen that the ribbon model may be formed by splicing a plurality of triangular patch models together”; and [0131] “acquiring a preset ribbon model, wherein the preset ribbon model comprises a plurality of patch models; adjusting the plurality of patch models of the preset ribbon model, based on the at least one strip particle and attribute values of trail particle attributes corresponding to the at least one strip particle.” Guo teaches receiving externally-input data sent by the CPU, e.g., the strip particle map as a user input, and data obtained after converting the extracted data, e.g., adopting the strip particle map shown in Fig. 2E to render the ribbon model of Fig. 2D); and
wherein according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor comprises:
taking the externally-input data as an input to the effect processor, and according to the association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain the effect picture (Guo, [0093] “when the visual effect trajectory is determined according to the movement of the fingertip as the visual trajectory point, and then according to such a generation mode of trail particle, an effect of the firework trail that moves with the movement of the fingertip may be generated”; Fig. 1, [0094] “when the visual effect trajectory is a preset trajectory, in step S20, generating the plurality of trail particles at equal separation distances or generating the plurality of trail particles at random separation distances along the extension trajectory and for each trail particle among the plurality of trail particles, setting attribute values of trail particle attributes corresponding to the trail particle”; [0039] “Step S40: rendering the trail particle set, to obtain a trail particle primitive model set”; and [0089] “As shown in FIG. 2B, spark maps may include spark map S01 to spark map S16. When rendering any spark particle, one spark map may be randomly selected from the 16 spark maps for rendering and displaying.” Guo teaches taking the externally-input data (e.g., separation distances of the movement of the fingertip as the visual trajectory point), generating the plurality of trail particles, and rendering the trail particle set to obtain a trail particle primitive model set (e.g., one of spark maps S01 to S16 is randomly selected when rendering any spark particle) to obtain the effect picture (Fig. 2B)).
Regarding Claim 6, a combination of Guo and Sebastian discloses the method of claim 5, wherein the effect processing unit in the effect processor comprises: a particle processing unit and a particle rendering unit, and wherein taking the externally-input data as an input to the effect processor, and according to the association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain the effect picture comprises:
taking the externally-input data as an input to the particle processing unit, and invoking the particle processing unit to process the particles, wherein the particle processing unit is used for performing one of: generating the particles (Guo, [0181] “the firework visual effect may also be a trajectory drawn in real time by the user on the display screen”; [0093] “when the visual effect trajectory is determined according to the movement of the fingertip as the visual trajectory point, and then according to such a generation mode of trail particle, an effect of the firework trail that moves with the movement of the fingertip may be generated”; and Fig. 1, [0094] “when the visual effect trajectory is a preset trajectory, in step S20, generating the plurality of trail particles at equal separation distances or generating the plurality of trail particles at random separation distances along the extension trajectory and for each trail particle among the plurality of trail particles, setting attribute values of trail particle attributes corresponding to the trail particle.” Guo teaches taking the externally-input data, e.g., separation distances of the movement of the fingertip as the visual trajectory point, generating the plurality of trail particles, and setting attribute values of trail particle attributes corresponding to the trail particle) or updating attributes of the particles;
invoking the particle rendering unit to render the particles according to the attributes of the particles to obtain the effect picture (Guo, Fig. 1, [0038] “Step S30: rendering the firework particle set, to obtain a firework particle primitive model set”; [0039] “Step S40: rendering the trail particle set, to obtain a trail particle primitive model set”; and [0089] “As shown in FIG. 2B, spark maps may include spark map S01 to spark map S16. When rendering any spark particle, one spark map may be randomly selected from the 16 spark maps for rendering and displaying.” Guo teaches rendering particles according to the attributes of the particles (spark maps S01 to S16) to obtain the effect picture (Fig. 2B)).
Regarding Claim 7, a combination of Guo and Sebastian discloses the method of claim 1, wherein the effect processing unit in the effect processor comprises: a particle generating unit, a particle updating unit and a particle rendering unit, and wherein according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor comprises:
invoking the particle generating unit to generate the particles (Guo, [0037] “Step S20: based on the visual effect trajectory, generating a firework particle set and a trail particle set in a three-dimension space for generating the firework visual effect” and [0044] “both the firework particle set and the trail particle set are generated based on the GPU particle.” Guo teaches that a GPU generates the particles); invoking the particle updating unit to update attributes of the particles (Guo, [0065] “In a particle lifetime of each firework particle, updating the firework particle attribute values corresponding to the firework particle according to a rendering frame rate.” Guo teaches updating attributes of the particles);
invoking the particle rendering unit to render the particles according to the attributes of the particles (Guo, [0065] “rendering the firework particle based on the updated firework particle attribute values and the corresponding firework particle map group.” Guo teaches rendering the firework particle based on the updated firework particle attribute values).
Regarding Claim 10 (Canceled).
Regarding Claim 11 (Currently amended), a combination of Guo and Sebastian discloses an electronic device (Guo, [0007] “an electronic device”), comprising: at least one processor and a memory (Guo, [0007] “a memory and a processor”);
the memory storing computer-executable instructions (Guo, [0007] “a memory, configured to store computer-executable instructions”); the at least one processor executing the computer-executable instructions stored in the memory (Guo, [0007] “a processor, configured to run computer-executable instructions”) to cause the electronic device to:
acquire effect resource data comprising a self-defined effect logic, the effect logic being used for specifying at least one effect processor and at least two associated effect processing units included in each effect processor;
generate the effect processor and the effect processing units according to the effect logic, each effect processor corresponding to a set of particles, and the effect processing units being used for processing the particles through a shader of a Graphic Processing Unit GPU;
according to an association relationship between the effect processing units included in the effect processor, sequentially invoke the effect processing units to process particles corresponding to the effect processor, to obtain an effect picture, wherein each of the effect processing units is used for performing one of the following types of processing: generating the particles, updating attributes of the particles, and rendering the particles according to the attributes of the particles, and wherein the particles are display objects of geometry.
wherein the effect resource data further comprises shader information corresponding to each of the effect processing units, and the shader information is updated to dynamically distribute shaders and modify processing logic of the particles, the processing logic comprising at least one of generation logic of the particles, update logic of the particles, or rendering logic of the particles.
Claim 11 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claims 12-13 (Canceled).
Regarding Claim 14 (Currently amended), a combination of Guo and Sebastian discloses a computer program product (Guo, [0189] “one or more computer program products”) being stored in a non-transient computer storage medium and comprising machine-executable instructions (Guo, [0189] “One or more computer-readable instructions may be stored on the computer-readable storage medium”) which, when executed by a device, cause the device to perform acts comprising:
acquiring effect resource data comprising a self-defined effect logic, the effect logic being used for specifying at least one effect processor and at least two associated effect processing units included in each effect processor;
generating the effect processor and the effect processing units according to the effect logic, each effect processor corresponding to a set of particles, and the effect processing units being used for processing the particles through a shader of a Graphics Processing Unit (GPU);
according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor, to obtain an effect picture, wherein each of the effect processing units is used for performing one of the following types of processing:
generating the particles, updating attributes of the particles, and rendering the particles according to the attributes of the particles, and wherein the particles are display objects of geometry;
wherein the effect resource data further comprises shader information corresponding to each of the effect processing units, and the shader information is updated to dynamically distribute shaders and modify processing logic of the particles, the processing logic comprising at least one of generation logic of the particles, update logic of the particles, or rendering logic of the particles.
Claim 14 is substantially similar to claim 1 and is rejected based on similar analyses.
Regarding Claim 15, a combination of Guo and Sebastian discloses the electronic device of claim 11, wherein the at least one effect processor includes a first effect processor and a second effect processor which are associated with each other and respectively used to process at least one group of first particles and at least one group of second particles which are associated with each other;
the effect picture includes a first sub-picture and a second sub-picture, and according to an association relationship between effect processing units included in the first effect processor, sequentially invoking the effect processing units to process the particles corresponding to the effect processor to obtain the effect picture comprises:
according to an association relationship between the effect processing units included in the first effect processor, sequentially invoking the effect processing units to process the first particles to obtain the first sub-picture;
taking particle identifiers of the first particles as an input to the second effect processor, and according to the association relationship between effect processing units included in the second effect processor, sequentially invoking the effect processing units to process the second particles to obtain the second sub-picture.
Claim 15 is substantially similar to claim 2 and is rejected based on similar analyses.
Regarding Claim 16, a combination of Guo and Sebastian discloses the electronic device of claim 15, wherein the effect processing units in the effect processor include a particle processing unit and a particle rendering unit;
taking particle identifiers of the first particles as an input to the second effect processor, and according to the association relationship between the effect processing units included in the second effect processor, sequentially invoking the effect processing units to process the second particles to obtain the second sub-picture comprises:
taking particle identifiers of the first particles as an input to the particle processing unit, and invoking the particle processing unit to process the second particles, wherein the particle processing unit is used for performing one of the following types of processing: generating the second particles, and updating attributes of the second particles;
invoking the particle rendering unit to render the second particles according to the attributes of the particles to obtain the second sub-picture.
Claim 16 is substantially similar to claim 3 and is rejected based on similar analyses.
Regarding Claim 17 (Currently amended), a combination of Guo and Sebastian discloses the electronic device of claim 11, wherein
upon invoking an effect processing unit according to the association relationship between the effect processing units included in the effect processor, uploading a corresponding shader according to the shader information; invoking the shader to process the particles to obtain the effect picture.
Claim 17 is substantially similar to claim 4 and is rejected based on similar analyses.
Regarding Claim 18, a combination of Guo and Sebastian discloses the electronic device of claim 11, wherein before, according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain an effect picture, the electronic device is further configured to:
receive externally-input data sent by a Central Processing Unit (CPU), the externally-input data comprising at least one of: data extracted by the CPU from a user input instruction, and data obtained after converting the extracted data; and
wherein according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor comprises:
taking the externally-input data as an input to the effect processor, and according to the association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain the effect picture.
Claim 18 is substantially similar to claim 5 and is rejected based on similar analyses.
Regarding Claim 19, a combination of Guo and Sebastian discloses the electronic device of claim 18, wherein the effect processing unit in the effect processor comprises: a particle processing unit and a particle rendering unit, and wherein taking the externally-input data as an input to the effect processor, and according to the association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain the effect picture comprises:
taking the externally-input data as an input to the particle processing unit, and invoking the particle processing unit to process the particles, wherein the particle processing unit is used for performing one of: generating the particles, or updating attributes of the particles;
invoking the particle rendering unit to render the particles according to the attributes of the particles to obtain the effect picture.
Claim 19 is substantially similar to claim 6 and is rejected based on similar analyses.
Regarding Claim 20, a combination of Guo and Sebastian discloses the electronic device of claim 11, wherein the effect processing unit in the effect processor comprises: a particle generating unit, a particle updating unit and a particle rendering unit, and wherein according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor comprises:
invoking the particle generating unit to generate the particles; invoking the particle updating unit to update attributes of the particles;
invoking the particle rendering unit to render the particles according to the attributes of the particles.
Claim 20 is substantially similar to claim 7 and is rejected based on similar analyses.
Regarding Claim 23, a combination of Guo and Sebastian discloses the computer program product of claim 14, wherein the at least one effect processor includes a first effect processor and a second effect processor which are associated with each other and respectively used to process at least one group of first particles and at least one group of second particles which are associated with each other; the effect picture includes a first sub-picture and a second sub-picture, and according to an association relationship between effect processing units included in the first effect processor, sequentially invoking the effect processing units to process the particles corresponding to the effect processor to obtain the effect picture comprises:
according to an association relationship between the effect processing units included in the first effect processor, sequentially invoking the effect processing units to process the first particles to obtain the first sub-picture;
taking particle identifiers of the first particles as an input to the second effect processor, and according to the association relationship between effect processing units included in the second effect processor, sequentially invoking the effect processing units to process the second particles to obtain the second sub-picture.
Claim 23 is substantially similar to claim 2 and is rejected based on similar analyses.
Claims 8, 9, 21 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Guo et al. (U.S. 2024/0037830 A1) in view of Sebastian et al. (U.S. 2017/0091983 A1), and further in view of Zhong et al. (U.S. 2023/0330433 A1).
Regarding Claim 8, the method of claim 1, a combination of Guo and Sebastian does not explicitly teach wherein before according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain an effect picture, the method further comprises:
determining a number of threads corresponding to the effect processor according to a number of particles corresponding to the effect processor;
starting threads according to the number of threads, each of the threads being used for executing the processing of a particle corresponding to the effect processor.
However, Zhong teaches determining a number of threads corresponding to the effect processor according to a number of particles corresponding to the effect processor (Zhong, [0022] “the particles may be simulated by process parallelism and thread parallelism of a central processing unit (CPU), a calculation process of the process parallelism and thread parallelism of the CPU is as follows. Firstly, the system obtains a number of processes or threads to obtain a numerical value n; then, the system equally divides particles required to be simulated into n parts” and [0110] “Effect of each of the process parallelism and the thread parallelism is limited to a number of cores of CPU…and the thread parallelism may increase speed by 4 times (process) or 8 times (thread) at most”; Zhong teaches determining a number of threads (n = 4 or 8) corresponding to the effect processor (CPU) according to a number of particles corresponding to the n threads);
starting threads according to the number of threads, each of the threads being used for executing the processing of a particle corresponding to the effect processor (Zhong, [0022] “Firstly, the system obtains a number of processes or threads to obtain a numerical value n; then, the system equally divides particles required to be simulated into n parts”; Zhong teaches starting threads according to the number of threads (n), each thread being used for executing the processing of a particle corresponding to the CPU).
Guo, Sebastian and Zhong are combinable because they are from the same field of endeavor, systems and methods for image processing, and try to solve similar problems. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method of Guo by determining a number of threads corresponding to the effect processor (CPU) (as taught by Zhong) according to a number of particles, because Zhong provides determining a number of threads (n) corresponding to the effect processor (CPU) according to a number of particles corresponding to the n threads (Zhong, [0022], [0110]). Doing so may allow simulation tasks of different source particles to be allocated to different threads, with summarization performed after the calculation task of each of the threads is completed, to obtain a final calculation result (Zhong, [0020]).
Regarding Claim 9, the method of claim 8, a combination of Guo and Sebastian does not explicitly teach wherein the number of threads is a multiple of the number of threads included by a thread group.
However, Zhong teaches the number of threads is a multiple of the number of threads included by a thread group (Zhong, [0022] “Firstly, the system obtains a number of processes or threads to obtain a numerical value n; then, the system equally divides particles required to be simulated into n parts” and [0110] “Effect of each of the process parallelism and the thread parallelism is limited to a number of cores of CPU…and the thread parallelism may increase speed by 4 times (process) or 8 times (thread) at most”; Zhong teaches the number of threads is included in a thread group, e.g., a group of 4 or a group of 8).
Guo, Sebastian and Zhong are combinable; see the rationale set forth in claim 8.
Regarding Claim 21, a combination of Guo, Sebastian and Zhong discloses the electronic device of claim 11, wherein before, according to an association relationship between the effect processing units included in the effect processor, sequentially invoking the effect processing units to process particles corresponding to the effect processor to obtain an effect picture, the electronic device is further configured to:
determine a number of threads corresponding to the effect processor according to a number of particles corresponding to the effect processor;
start threads according to the number of threads, each of the threads being used for executing the processing of a particle corresponding to the effect processor.
Claim 21 is substantially similar to claim 8 and is rejected based on similar analyses.
Regarding Claim 22, a combination of Guo, Sebastian and Zhong discloses the electronic device of claim 21, wherein the number of threads is a multiple of the number of threads included by a thread group.
Claim 22 is substantially similar to claim 9 and is rejected based on similar analyses.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to KHOA VU whose telephone number is (571) 272-5994. The examiner can normally be reached 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached at 571-272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/KHOA VU/Examiner, Art Unit 2611
/KEE M TUNG/Supervisory Patent Examiner, Art Unit 2611