Prosecution Insights
Last updated: April 19, 2026
Application No. 18/772,076

SPLIT-SCREEN EFFECT GENERATING METHOD AND APPARATUS, DEVICE, AND MEDIUM

Status: Non-Final OA (§103)
Filed: Jul 12, 2024
Examiner: SHEN, SAMUEL
Art Unit: 2179
Tech Center: 2100 — Computer Architecture & Software
Assignee: BEIJING ZITIAO NETWORK TECHNOLOGY CO., LTD.
OA Round: 5 (Non-Final)

Grant Probability: 40% (Moderate)
Estimated OA Rounds: 5-6
Estimated Time to Grant: 3y 5m
Grant Probability With Interview: 67%

Examiner Intelligence

Career Allow Rate: 40% (48 granted / 119 resolved; -14.7% vs Tech Center average)
Interview Lift: +26.3% (strong lift for resolved cases with an interview)
Typical Timeline: 3y 5m average prosecution; 25 applications currently pending
Career History: 144 total applications across all art units

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 58.7% (+18.7% vs TC avg)
§102: 12.2% (-27.8% vs TC avg)
§112: 21.0% (-19.0% vs TC avg)

Comparison baseline is the Tech Center average estimate. Based on career data from 119 resolved cases.
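As a sanity check on how the report's figures relate, the implied without-interview allowance rate can be back-computed from the with-interview rate and the lift. This assumes the dashboard defines "interview lift" as a simple difference in allowance rates (percentage points); that definition is the editor's assumption, not something the report states.

```python
# Hypothetical reconstruction of the dashboard's interview-lift arithmetic.
# Assumption: lift = allowance rate with interview - allowance rate without.
rate_with_interview = 67.0   # % (from the report)
interview_lift = 26.3        # percentage points (from the report)

implied_rate_without = rate_with_interview - interview_lift
print(f"Implied allowance rate without interview: {implied_rate_without:.1f}%")
# → Implied allowance rate without interview: 40.7%
```

Under that assumption, the implied without-interview rate (≈40.7%) sits close to the examiner's overall 40% career allow rate, which is internally consistent with the report.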

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 3/16/2026 has been entered.

Response to Amendment

The rejections under 35 U.S.C. §112(b) of claims 1, 3-11, 13-16, and 18-20 are withdrawn in view of the amendments to the independent claims. Examiner acknowledges that the amendments to the claims received on 3/16/2026 have been entered, and that no new matter has been added.

Response to Arguments

Argument 1: Applicant argues on page 20 of the filing of 3/16/2026 that the cited prior art does not teach “presenting multiple sub-screens that display effect rendering results from the same image or video” in claim 1.

Response to Argument 1: Respectfully, Grinstein discloses the above. Grinstein discloses multiple sub-screens in Fig. 33 (a sub-screen with a leg, and a sub-screen with the rest of the body) and in Figs. 37-39 (a sub-screen with a head, a sub-screen with a torso, and a sub-screen with the rest of the body). These display the “swing” rendering results in the side panel. The claims require that two sub-screens render results based on the same image; the sub-screens of Grinstein are all based on the same “human.x” image. See the rejection below for more details.

Argument 2: Applicant argues on page 20 that the cited prior art does not teach “the concept of applying common effect materials once across multiple sub-screens. In Grinstein, materials such as ‘Swing’ are redundantly loaded at multiple levels of the hierarchy, leading to repeated processing and rendering” in claim 1.

Response to Argument 2: Respectfully, Grinstein discloses the above. Grinstein [Col 55 lines 25-31, Fig. 34] discloses that any item loaded into a parent node once will cause all child nodes to have the same properties. See the rejection below for more details.

Argument 3: Applicant argues on pages 22-23 that the cited prior art Sykes and Krueger do “not consider setting different effect display orders for different materials” in claims 4-5.

Response to Argument 3: Argument 3 is moot in view of new grounds of rejection. The scope of the amendment has changed and new art has been applied. This meets the claim limitations as currently claimed, and Applicant's Argument 3 filed on 3/16/2026 is moot in view of new grounds of rejection necessitated by the Applicant's amendment; Applicant's Arguments 1-2 are not persuasive. Applicant's remaining statements regarding the remaining independent and dependent claims are moot or not persuasive for the reasons stated above.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 3, 5-11, 13-16, and 18-20 are rejected under 35 U.S.C. 103 as being unpatentable over Grinstein et al., Patent Number US 6714201 B1 (hereinafter “Grinstein”), in view of Mallick et al., Patent Number US 9058765 B1 (hereinafter “Mallick”).
Claim 1: Grinstein teaches “split-screen effect generation method, comprising: displaying a split-screen model in a material operation area in an effect production interface (i.e. toolbar shown in FIG. 33 includes four additional buttons 518 through 521 related to motions… The add-motion button 518… add-sound button 524 [Grinstein Col 54 lines 26-28, 52, Fig. 33, 34, 37-39] note: Grinstein Fig. 34, 37-39 shows an effect production interface. The material operation area is the whole screen of Fig. 33-34, 37-39. Instant specification [0038] states “The material operation area is an area in the effect production interface, which is used to provide interactive operation functions and display the materials,” e.g. Grinstein’s motion and sound buttons. There is a hierarchy tree on the left side of the material operation area. In Fig. 34, the split-screen model’s name “human.x” is displayed in this hierarchy. Instant specification [0049] states that “the split-screen model at the upper display level is the parent node,” thus the Examiner interprets “model” as a node, and “split-screen model” as a parent node (see also instant Fig. 2, “split-screen model” at the top of the hierarchy)--e.g. Grinstein’s “human.x” is a split-screen model/parent node in the hierarchy),

wherein the split-screen model is a preset model that splits a screen into at least two sub-screens (Grinstein Fig. 33-34, 37-39 shows that the “human.x” model is capable of displaying a dividing split. For example, in Fig. 34, the “human.x” model of a person is displayed, and a bounding box splits the screen into a leg portion and another portion with the rest of the person),

wherein the at least two sub-screens present effect rendering results based on a same image or video (It is noted that the term “based” is broad and means used as a point from which something can develop. Grinstein Fig. 33-34, 37-39 shows that at least two sub-screens are rendered based on the same image, at the same time. For example, Fig. 34 shows that the larger human model of a person is displayed, and a bounding box splits the screen into a leg portion and another portion with the rest of the person. The images are both “based” on, or developed from, the same human image);

in response to a triggering operation on a sub-screen included in the split-screen model (i.e. In the scene-view window 503 a user can click on a portion of the model 502 to be selected [Grinstein Col 55 lines 43-52, Fig. 34] note: in Fig. 34 the user uses cursor 542 and clicks on the leg of model “human.x,” which is a model capable of split-screens), displaying a target sub-screen model corresponding to the triggering operation (i.e. In the scene-view window 503 a user can click on a portion of the model 502 to be selected… the selected node 536AA is highlighted in the tree-view window 530 to indicate its selection [Grinstein Col 55 lines 43-52, Fig. 34] note: in Fig. 34, the user clicks on the leg, and the leg model/node “S_Leg_UR Parent 536AA” sub-model/node is determined) at a subordinate display level of the split-screen model (i.e. the selected node 536AA is highlighted in the tree-view window 530 to indicate its selection [Grinstein Col 55 lines 43-52, Fig. 34] note: model/node 536AA in the hierarchy is displayed at a subordinate level of “human.x” and “S_Body_B Parent” as a split-screen parent model/node), wherein a sub-screen model is a model corresponding to a split sub-screen (i.e. In the scene-view window 503 a user can click on a portion of the model 502 to be selected… As a result, the bounding box 544 is drawn around that node [Grinstein Col 55 lines 43-52, Fig. 34] note: Fig. 34 shows the leg model is split into its own sub-screen portion with bounding box 544);

in response to a first material loading operation, loading a common effect material into the material operation area (i.e. add-behavior option 556 allows a user to define or select and then add a given behavior, of the type described above in section 6.2.6 [of Grinstein], to the currently selected motion [Grinstein Col 56 lines 22-24, Fig. 34, 38]… swing button 516, an instance of an oscillating rotation motion class can be created [Grinstein Col 54 lines 7-8] note: clicking a button to add a “swing” behavior is a material loading operation. In Fig. 34, “Swing” material 540 is displayed subordinate to “S_Leg UR Parent” 536AA, which is subordinate to “S_Body_B Parent”) and displaying it at the subordinate display level of the split-screen model (i.e. child nodes that are to have their position defined relative to such a parent node are depended. Any motion 540 place[d] between a parent node and such a parent location node in the tree-view graph will cause all of the child nodes whose positions are defined relative to the parent location node to move as a function of such motions [Grinstein Col 55 lines 25-31, Fig. 34] note: placing Fig. 34’s “Swing” motion under “S_Body_B Parent” will cause the “Swing” motion to be applied to “S_Body_B Parent” itself, as well as to sub-screens “S_Leg_UR Parent” and “S_Leg_UL Parent.” In Fig. 34, “Swing” material 540 is displayed with parent node “S_Body_B Parent,” which is at a subordinate level different than (other than) target sub-screen model “S_Leg_UR Parent”);

wherein the common effect material refers to a same effect material that is to be shared by respective target sub-screen models and is to be loaded in each target sub-screen model (i.e. child nodes that are to have their position defined relative to such a parent node are depended. Any motion 540 place[d] between a parent node and such a parent location node in the tree-view graph will cause all of the child nodes whose positions are defined relative to the parent location node to move as a function of such motions [Grinstein Col 55 lines 25-31, Fig. 34] note: the subordinates of “S_Body_B Parent,” including “S_Leg_UL Parent” and “S_Leg_UR Parent,” both share the “Swing” effect), wherein the common effect material and the target sub-screen model are at a same display level in the material operation area (i.e. Each such parent node has associated with it a parent location node 538 indicated by a box with a diagonal line through it [Grinstein Col 55 lines 22-24] note: Grinstein Fig. 34 shows common effect material “Swing” 540 at the same display level as parent node location 538);

in response to a second material loading operation, loading a proprietary effect material into the material operation area and displaying it at a subordinate display level of the target sub-screen model, wherein the proprietary effect material refers to an effect material exclusive to the target sub-screen model (i.e. in the example of FIG. 40, if a swing motion was applied to the selected thigh model node 536EE, the thigh model would become disconnected from the lower leg parent node 536AF, the lower leg model 536FF, the foot parent node 536AG, and any nodes which depend from the foot parent node, because they would not be linked to its motion [Grinstein Col 57 lines 20-26, Fig. 40] note: loading another swing motion at a thigh node 536EE would load “Swing” at its subordinate level, and its motion remains exclusive to the thigh);

generating a rendering link (From instant specification 0084-0085, a “rendering link” appears to be an ordered list. Grinstein’s paragraphs and table in Col 37 lines 35-50 teach rendering complex motions, which requires a specific order of basic motions) based on a first material information about the common effect material (Grinstein’s swing motion that is applied to all child nodes. See [Grinstein Col 55 lines 25-31, Fig. 34], cited above) and a second material information about the proprietary effect material (Grinstein’s swing motion that is NOT applied to all child nodes. See [Grinstein Col 57 lines 20-26, Fig. 40], cited above);…

generating a split-screen effect based on the split-screen model, the common effect material, the target sub-screen model, and the proprietary effect material, wherein a rendering control for the generated split-screen effect is performed (i.e. This will enable a user to vary the parameters defining the selected motion and to interactively see their effects upon the selected motion, as such changes are made, in the scene window 503 [Grinstein Col 56 lines 6-10] note: any changes made above are displayed immediately).”

Grinstein is silent regarding “wherein the rendering link is used to record a rendering order for the common effect material based on the first material information about the common effect material and a rendering order for the proprietary effect material based on the second material information about the proprietary effect material, and wherein each of the first material information and the second material information comprises a material type and a material arrangement position, or the material arrangement position;”

Mallick teaches “generating a rendering link (i.e. When all layers are combined into their pre-defined order, the resulting image may be displayed as the final makeover image [Mallick Col 19 lines 25-45] note: from instant specification 0084-0085, a “rendering link” appears to be an ordered list) based on a first material information about the common effect material (i.e. skin layers (foundation, bronzer, concealer, and blush, for example) would be rendered as the lowest layers (applied first to the base image) [Mallick Col 19 lines 25-45]) and a second material information about the proprietary effect material (i.e. Lip layers, such as lip liner, lipstick, and lip gloss, may be applied in order. Eye layers, such as liner, shadow, and mascara, for example, may be applied before or after lip layers, as the two sets are conceptually independent. Hair and accessory layers may be applied last [Mallick Col 19 lines 25-45]); wherein the rendering link is used to record a rendering order for the common effect material based on the first material information about the common effect material (i.e. skin layers (foundation, bronzer, concealer, and blush, for example) would be rendered as the lowest layers (applied first to the base image) [Mallick Col 19 lines 25-45]) and a rendering order for the proprietary effect material based on the second material information about the proprietary effect material (i.e. Lip layers, such as lip liner, lipstick, and lip gloss, may be applied in order. Eye layers, such as liner, shadow, and mascara, for example, may be applied before or after lip layers, as the two sets are conceptually independent. Hair and accessory layers may be applied last [Mallick Col 19 lines 25-45]) (i.e. When all layers are combined into their pre-defined order, the resulting image may be displayed as the final makeover image [Mallick Col 19 lines 25-45]), and wherein each of the first material information and the second material information comprises a material type (i.e. skin layers (foundation, bronzer, concealer, and blush, for example) [Mallick Col 19 lines 25-45] note: material type for first material information) (i.e. Lip layers, such as lip liner, lipstick, and lip gloss, may be applied in order. Eye layers, such as liner, shadow, and mascara, for example [Mallick Col 19 lines 25-45] note: material type for second material information) and a material arrangement position (Mallick Col 19 lines 25-45 discloses skin layers, which are a first/common material positioned all over the face, as well as lipstick, lip gloss, and eye liner, shadow, and mascara, which are second/proprietary materials positioned at specific locations on the face, e.g. on the lip position or on the eye position), or the material arrangement position; and generating a… effect based on… the common effect material, the target sub-screen model, and the proprietary effect material, wherein a rendering control for the generated… effect is performed based on the rendering link (i.e. When all layers are combined into their pre-defined order, the resulting image may be displayed as the final makeover image [Mallick Col 19 lines 25-45]).”

It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the invention/combination of Grinstein to include the feature of having the ability to order rendering as disclosed by Mallick. One would have been motivated to do so, before the effective filing date of the invention, because it provides the benefit of ensuring the correct order of rendering, so items aren’t displayed out of order. Items displayed out of order would be covered up, missing to the user, or look like visual errors. Ensuring the correct order reduces these visual errors and ensures items on the screen are efficiently displayed (e.g. not re-rendered due to being covered up, or error-corrected).

Claim 3: Grinstein and Mallick teach all the limitations of claim 1, above. Grinstein teaches “wherein, before generating the split-screen effect based on the split-screen model, the common effect material, the target sub-screen model, and the proprietary effect material (i.e.
button 505 opens a previously saved scene, model, motion, or package of motions [Grinstein Col 53 lines 25-26] note: loading a previously saved package indicates that the package was generated before), the method further comprises: rendering the common effect material (Grinstein’s swing motion that is applied to all child nodes. See [Grinstein Col 55 lines 25-31, Fig. 34], cited in claim 1, above) and the proprietary effect material respectively (Grinstein’s swing motion that is NOT applied to all child nodes. See [Grinstein Col 57 lines 20-26, Fig. 40], cited in claim 1, above).”

Mallick teaches “rendering the common effect material (i.e. skin layers (foundation, bronzer, concealer, and blush, for example) would be rendered as the lowest layers (applied first to the base image) [Mallick Col 19 lines 25-45]) and the proprietary effect material respectively (i.e. Lip layers, such as lip liner, lipstick, and lip gloss, may be applied in order. Eye layers, such as liner, shadow, and mascara, for example, may be applied before or after lip layers, as the two sets are conceptually independent. Hair and accessory layers may be applied last [Mallick Col 19 lines 25-45]) (i.e. When all layers are combined into their pre-defined order, the resulting image may be displayed as the final makeover image [Mallick Col 19 lines 25-45]) based on the rendering link, to generate a target rendering result (i.e. When all layers are combined into their pre-defined order, the resulting image may be displayed as the final makeover image [Mallick Col 19 lines 25-45]); and displaying the target rendering result in an effect presentation area in the effect production interface (i.e. When all layers are combined into their pre-defined order, the resulting image may be displayed as the final makeover image [Mallick Col 19 lines 25-45]).”

One would have been motivated to combine Grinstein and Mallick, before the effective filing date of the invention, because it provides the benefit of ensuring the correct order of rendering, so items aren’t displayed out of order. Items displayed out of order would be covered up, missing to the user, or look like visual errors. Ensuring the correct order reduces these visual errors and ensures items on the screen are efficiently displayed (e.g. not re-rendered due to being covered up, or error-corrected).

Claim 5: Grinstein and Mallick teach all the limitations of claim 1, above. Grinstein teaches a first material information about the common effect material (Grinstein’s swing motion that is applied to all child nodes. See [Grinstein Col 55 lines 25-31, Fig. 34], cited in claim 1, above), and a second material information about the proprietary effect material (Grinstein’s swing motion that is NOT applied to all child nodes. See [Grinstein Col 57 lines 20-26, Fig. 40], cited in claim 1, above).
Grinstein is silent regarding “wherein, when it is determined that the material type of the proprietary effect material matches a preset material type, the generating the rendering link based on the first material information about the common effect material and the second material information about the proprietary effect material, comprises: generating the rendering link based on the material arrangement position in the first material information about the common effect material and the material arrangement position in the second material information about the proprietary effect material, wherein the rendering link is used to record a third rendering order for the proprietary effect material based on the material arrangement position in the second material information, and a fourth rendering order for the common effect material based on the material arrangement position in the first material information; wherein the third rendering order takes precedence over the fourth rendering order.”

Mallick teaches “wherein, when it is determined that the material type of the proprietary effect material matches a preset material type (i.e. Hair and accessory layers may be applied last in one or more embodiments of the invention [Mallick Col 19 lines 25-45] note: if the material matches hair accessories, then the material is displayed last, or on the top layer, which takes precedence over all materials, including common/shared materials), the generating the rendering link based on the first material information about the common effect material and the second material information about the proprietary effect material, comprises: generating the rendering link based on the material arrangement position in the first material information about the common effect material and the material arrangement position in the second material information about the proprietary effect material, wherein the rendering link is used to record a third rendering order for the proprietary effect material based on the material arrangement position in the second material information, and a fourth rendering order for the common effect material based on the material arrangement position in the first material information; wherein the third rendering order takes precedence over the fourth rendering order (i.e. Hair and accessory layers may be applied last in one or more embodiments of the invention [Mallick Col 19 lines 25-45] note: if the material matches hair accessories, then the material is displayed last, or on the top layer, which takes visual precedence over all materials, including common/shared materials).”

One would have been motivated to combine Grinstein and Mallick, before the effective filing date of the invention, because it provides the benefit of ensuring the correct order of rendering, so items aren’t displayed out of order. Items displayed out of order would be covered up, missing to the user, or look like visual errors. Ensuring the correct order reduces these visual errors and ensures items on the screen are efficiently displayed (e.g. not re-rendered due to being covered up, or error-corrected).
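The layer-ordering logic the Office Action attributes to Mallick (common materials such as skin layers rendered lowest; a proprietary material rendered last, on top, only when its type matches a preset type such as hair/accessories, per the claim 5 and claim 6 precedence rules) can be sketched as a small program. This is an illustrative editor's sketch of the claimed "rendering link" as an ordered list; the names (`Material`, `build_rendering_link`, the contents of `PRESET_TYPES`) are hypothetical and are not taken from the application or the cited references.

```python
from dataclasses import dataclass

# Hypothetical material record: per the claims, "material information"
# comprises a material type and a material arrangement position.
@dataclass
class Material:
    name: str
    material_type: str
    position: str    # material arrangement position (e.g. "face", "lips")
    common: bool     # True = common/shared material, False = proprietary

# Preset types that, when matched by a proprietary material, render last
# (claim 5: the proprietary material's order takes precedence). Mirrors
# Mallick's "hair and accessory layers may be applied last" example.
PRESET_TYPES = {"hair", "accessory"}

def build_rendering_link(materials):
    """Return the 'rendering link' as an ordered list: earlier = rendered first."""
    common = [m for m in materials if m.common]
    proprietary = [m for m in materials if not m.common]
    matching = [m for m in proprietary if m.material_type in PRESET_TYPES]
    non_matching = [m for m in proprietary if m.material_type not in PRESET_TYPES]
    # Non-matching proprietary materials render first, so common materials
    # take precedence over them (claim 6); matching proprietary materials
    # render last, taking precedence over common materials (claim 5).
    return non_matching + common + matching

link = build_rendering_link([
    Material("foundation", "skin", "face", common=True),
    Material("lipstick", "lip", "lips", common=False),
    Material("wig", "hair", "head", common=False),
])
print([m.name for m in link])  # → ['lipstick', 'foundation', 'wig']
```

In this sketch "takes precedence" is modeled as "rendered later, i.e. composited on top," which is how the Office Action reads Mallick's "applied last" disclosure.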
Claim 6: Grinstein and Mallick teach all the limitations of claim 1, above. Grinstein teaches a first material information about the common effect material (Grinstein’s swing motion that is applied to all child nodes. See [Grinstein Col 55 lines 25-31, Fig. 34], cited in claim 1, above), and a second material information about the proprietary effect material (Grinstein’s swing motion that is NOT applied to all child nodes. See [Grinstein Col 57 lines 20-26, Fig. 40], cited in claim 1, above).

Grinstein is silent regarding “wherein, when it is determined that the material type of the proprietary effect material does not match a preset material type, the generating the rendering link based on the first material information about the common effect material and the second material information about the proprietary effect material, comprises: generating the rendering link based on the material arrangement position in the first material information about the common effect material and the material arrangement position in the second material information about the proprietary effect material, wherein the rendering link is used to record a fifth rendering order for the common effect material based on the material arrangement position in the first material information, and a sixth rendering order for the proprietary effect material based on the material arrangement position in the second material information; wherein the fifth rendering order takes precedence over the sixth rendering order.”

Mallick teaches “wherein, when it is determined that the material type of the proprietary effect material does not match a preset material type (i.e. Hair and accessory layers may be applied last in one or more embodiments of the invention [Mallick Col 19 lines 25-45]… Rendering function 705 may be configured to blend the skin tone associated with the face with the makeover item… for example concealer [Mallick Col 26 line 64 - Col 27 lines 1-12] note: if the material does not match hair accessories, e.g. concealer, then the material is not displayed last, or on the top layer, which means it does not take precedence), the generating the rendering link based on the first material information about the common effect material and the second material information about the proprietary effect material, comprises: generating the rendering link based on the material arrangement position in the first material information about the common effect material and the material arrangement position in the second material information about the proprietary effect material, wherein the rendering link is used to record a fifth rendering order for the common effect material based on the material arrangement position in the first material information, and a sixth rendering order for the proprietary effect material based on the material arrangement position in the second material information; wherein the fifth rendering order takes precedence over the sixth rendering order (i.e. Rendering function 105 may be configured to blend the skin tone with the makeover item, for example hair having an associated hairstyle and hair color, at a boundary between the face and the hair [Mallick Col 26 line 64 - Col 27 lines 1-12] note: skin tone/concealer is blended with the hair makeover item. The skin tone/concealer/common material is displayed on top, as a precedence. The hair is no longer displayed last--no longer displayed with precedence).”

One would have been motivated to combine Grinstein and Mallick, before the effective filing date of the invention, because it provides the benefit of ensuring the correct order of rendering, so items aren’t displayed out of order. Items displayed out of order would be covered up, missing to the user, or look like visual errors. Ensuring the correct order reduces these visual errors and ensures items on the screen are efficiently displayed (e.g. not re-rendered due to being covered up, or error-corrected).

Claim 7: Grinstein and Mallick teach all the limitations of claim 1, above. Grinstein teaches “wherein, before generating the split-screen effect based on the split-screen model, the common effect material and the target sub-screen model, the method further comprises: in response to a material moving operation, moving a target effect material to a target position corresponding to the material moving operation (i.e. If the user selects to add a motion instance to a node… by having dragged a motion to a selected node [Grinstein Col 62 lines 35-41]), and determining target material information corresponding to the moved target effect material based on the target position (i.e. then step 654 calls associateMotionWithNode function 778 [Grinstein Col 62 lines 35-41] note: the motion is associated with the node, including material information of the position of the node, e.g. swing motion upon the leg node); wherein, the target effect material is the common effect material or an exclusive effect material, and the target position is a material position subordinate to the split-screen model or a material position subordinate to the target sub-screen model (i.e. add-behavior option 556 allows a user to define or select and then add a given behavior, of the type described above in section 6.2.6 [of Grinstein], to the currently selected motion [Grinstein Col 56 lines 22-24, Fig.
34, 38]… swing button 516, an instance of an oscillating rotation motion class can be created [Grinstein Col 54 lines 7-8] note: adding a “swing” behavior is a material loading operation. In Fig. 34, “Swing” material 540 is displayed subordinate to “S_Leg UR Parent” 536AA, which is subordinate to “human.x”).”

Claim 8: Grinstein and Mallick teach all the limitations of claim 7, above. Grinstein teaches “the determining target material information corresponding to the moved target effect material based on the target position, comprises: establishing an association relationship between the moved target effect material and a target model corresponding to the target position; wherein the target model is the split-screen model or the target sub-screen model (i.e. If the user selects to add a motion instance to a node… by having dragged a motion to a selected node… then step 654 calls associateMotionWithNode function 778 [Grinstein Col 62 lines 35-41] note: the motion is associated with the node, including material information of the position of the node, e.g. swing motion upon the leg node); determining a target display level corresponding to the moved target effect material based on the display level corresponding to the target model (i.e. add-behavior option 556 allows a user to define or select and then add a given behavior, of the type described above in section 6.2.6 [of Grinstein], to the currently selected motion [Grinstein Col 56 lines 22-24, Fig. 34, 38] note: adding a “swing” behavior is a material loading operation. In Fig. 34, “Swing” material 540 is displayed subordinate to “S_Leg UR Parent” 536AA, which is subordinate to “human.x”); determining the moved target effect material to be the common effect material or the proprietary effect material based on the target display level (i.e. in the example of FIG. 40, if a swing motion was applied to the selected thigh model node 536EE, the thigh model would become disconnected from the lower leg parent node 536AF, the lower leg model 536FF, the foot parent node 536AG, and any nodes which depend from the foot parent node, because they would not be linked to its motion [Grinstein Col 57 lines 20-26, Fig. 40] note: whether the effect is common or proprietary depends on the node where the “swing” effect is dragged, and each node has a target display level, so it is also based on the node’s target display level), and determining a target material arrangement position corresponding to the moved target effect material based on the target position (i.e. If the user selects to add a motion instance to a node… by having dragged a motion to a selected node [Grinstein Col 62 lines 35-41]).”

Claim 9: Grinstein and Mallick teach all the limitations of claim 1, above. Grinstein teaches “wherein, before generating the split-screen effect based on the split-screen model, the common effect material and the target sub-screen model, the method further comprises: determining component types and component amount of rendering camera components in the split-screen model (i.e. motions could be applied to cameras [Grinstein Col 59 lines 16-21, Fig. 45] note: “cameras” indicates at least 2 cameras, which is an amount of camera components. Fig. 45, element 588 shows a camera node component type. This is in the left pane of a split-screen image editor), based on a material type and material amount of the common effect material, to update the split-screen model (Grinstein Fig. 45 shows a plurality of material types and amounts subordinate to the camera node, to be displayed on the screen).”

Claim 10: Grinstein and Mallick teach all the limitations of claim 1, above.
Grinstein teaches “wherein, after in response to the triggering operation on the sub-screen included in the split-screen model, displaying the target sub-screen model corresponding to the triggering operation at the subordinate display level of the split-screen model, the method further comprises: in response to a triggering operation on the target sub-screen model, displaying a parameter setting area corresponding to the target sub-screen model in the effect production interface (i.e., FIG. 42 illustrates the motion parameter window 558 which will be displayed for a selected node if the user clicks on the show-parameters button 550 shown in FIGS. 33 and 42 [Grinstein Col 57 lines 46-49, Fig. 33, 42]); in response to a parameter setting operation on a target parameter displayed in the parameter setting area, determining a target parameter value, and updating the target sub-screen model based on the target parameter value (i.e., if a user selects to change a parameter of a motion instance, such as by use of a show-parameters window 558 of the type shown in FIG. 42, steps 656 and 658 call the API with the corresponding change to the motion instance [Grinstein Col 63 lines 1-5, Fig. 42]).”

Claim 11: Grinstein and Mallick teach an electronic device, comprising: a processor; and a memory configured to store executable instructions; wherein the processor is configured to read the executable instructions from the memory and execute the executable instructions (i.e., a CPU 812 which can execute computer instructions stored in the random access memory 806 [Grinstein Col 68 lines 47-48]) to implement the method of claim 1; therefore, it is rejected under the same rationale.

Claim 13: Claim 13 is similar in content and in scope to claim 3, and thus is rejected under the same rationale.

Claim 14: Claim 14 is similar in content and in scope to claim 7, and thus is rejected under the same rationale.

Claim 15: Grinstein and Mallick teach all the limitations of claim 11, above.
Grinstein teaches “wherein, the processor is configured to read the executable instructions from the memory and execute the executable instructions to further implement: before generating the split-screen effect based on the split-screen model, the common effect material and the target sub-screen model, determining component types and component amount of rendering camera components in the split-screen model (i.e., motions could be applied to cameras [Grinstein Col 59 lines 16-21, Fig. 45]; note: “cameras” indicates at least 2 cameras, which is an amount of camera components. Fig. 45, element 588 shows a camera node component type. This is in the left pane of a split-screen image editor), based on a material type and material amount of the common effect material, to update the split-screen model (Grinstein Fig. 45 shows a plurality of material types and amounts subordinate to the camera node, to be displayed on the screen), or wherein, the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement: after in response to a triggering operation on a sub-screen included in the split-screen model, determining the target sub-screen model, and displaying the target sub-screen model at the subordinate display level of the split-screen model, in response to a triggering operation on the target sub-screen model, displaying a parameter setting area corresponding to the target sub-screen model in the effect production interface (i.e., FIG. 42 illustrates the motion parameter window 558 which will be displayed for a selected node if the user clicks on the show-parameters button 550 shown in FIGS. 33 and 42 [Grinstein Col 57 lines 46-49, Fig. 33, 42]); in response to a parameter setting operation on a target parameter displayed in the parameter setting area, determining a target parameter value, and updating the target sub-screen model based on the target parameter value (i.e., if a user selects to change a parameter of a motion instance, such as by use of a show-parameters window 558 of the type shown in FIG. 42, steps 656 and 658 call the API with the corresponding change to the motion instance [Grinstein Col 63 lines 1-5, Fig. 42]).”

Claim 16: Grinstein and Mallick teach a non-transitory computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, causes the processor (i.e., a CPU 812 which can execute computer instructions stored in the random access memory 806 [Grinstein Col 68 lines 47-48]) to implement the method of claim 1; therefore, it is rejected under the same rationale.

Claim 18: Claim 18 is similar in content and in scope to claim 3, and thus is rejected under the same rationale.

Claim 19: Claim 19 is similar in content and in scope to claim 7, and thus is rejected under the same rationale.

Claim 20: Claim 20 is similar in content and in scope to claim 15, and thus is rejected under the same rationale.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Frascati (US 20140184623 A1), listed on the PTO-892, is related to optimizing visual streams, re-ordering stream commands for rendering a plurality of targets.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMUEL SHEN, whose telephone number is (469) 295-9169 and email address is samuel.shen@uspto.gov. The examiner can normally be reached Monday-Thursday, 7:00 am - 5:00 pm CT. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya, can be reached on (571) 272-4034.
The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/S.S./
Examiner, Art Unit 2179

/IRETE F EHICHIOYA/
Supervisory Patent Examiner, Art Unit 2179

Prosecution Timeline

Jul 12, 2024
Application Filed
Sep 12, 2024
Non-Final Rejection — §103
Dec 23, 2024
Response Filed
Jan 17, 2025
Final Rejection — §103
Mar 27, 2025
Response after Non-Final Action
Apr 22, 2025
Request for Continued Examination
May 01, 2025
Response after Non-Final Action
May 27, 2025
Non-Final Rejection — §103
Aug 29, 2025
Response Filed
Dec 04, 2025
Final Rejection — §103
Feb 13, 2026
Response after Non-Final Action
Mar 16, 2026
Request for Continued Examination
Mar 19, 2026
Response after Non-Final Action
Mar 21, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12535945
UTILIZING MODULARIZED ACTION BLOCKS IN A GRAPHICAL USER INTERFACE TO GENERATE DIGITAL IMAGES WITH CUSTOM MODIFICATIONS
2y 5m to grant Granted Jan 27, 2026
Patent 12504949
MITIGATING LATENCY IN SPOKEN INPUT GUIDED SELECTION OF ITEM(S)
2y 5m to grant Granted Dec 23, 2025
Patent 12504872
METHOD FOR CONTROLLING FLEXIBLE DISPLAY AND ELECTRONIC DEVICE THEREOF
2y 5m to grant Granted Dec 23, 2025
Patent 12493447
METHODS, SYSTEMS, AND APPARATUS FOR PROVIDING COMPOSITE GRAPHICAL ASSISTANT INTERFACES FOR CONTROLLING CONNECTED DEVICES
2y 5m to grant Granted Dec 09, 2025
Patent 12436732
THE METHOD AND APPARATUS FOR CONTROLLING AUDIO DATA BY RECOGNIZING USER GESTURE AND POSITION USING MULTIPLE MOBILE DEVICES
2y 5m to grant Granted Oct 07, 2025
Study what changed to get past this examiner, based on the 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
40%
Grant Probability
67%
With Interview (+26.3%)
3y 5m
Median Time to Grant
High
PTA Risk
Based on 119 resolved cases by this examiner. Grant probability derived from career allow rate.
