Prosecution Insights
Last updated: April 19, 2026
Application No. 18/398,719

IMAGE SPLICING METHOD AND ELECTRONIC DEVICE

Non-Final OA: §103, §112
Filed: Dec 28, 2023
Examiner: BEUTEL, WILLIAM A
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: BEIJING DAJIA INTERNET INFORMATION TECHNOLOGY CO., LTD.
OA Round: 1 (Non-Final)
Grant Probability: 70% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 90%
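The "With Interview" figure appears to be roughly the career allow rate plus the observed interview lift. A minimal sketch of that arithmetic (the additive model and the 100% cap are assumptions for illustration, not this dashboard's documented methodology):

```python
# Hedged sketch: how a "with interview" grant probability could be derived
# from a base grant rate plus an observed interview lift. The additive
# model is an assumption for illustration only.

def with_interview_probability(base_rate: float, interview_lift: float) -> float:
    """Combine a base grant probability (%) with an interview lift (points), capped at 100%."""
    return min(base_rate + interview_lift, 100.0)

base = 70.0  # career allow rate shown above (%)
lift = 20.4  # interview lift shown above (percentage points)

print(round(with_interview_probability(base, lift)))  # ~90, consistent with the dashboard
```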

Examiner Intelligence

Career Allow Rate: 70% (328 granted / 469 resolved; +7.9% vs TC avg; above average)
Interview Lift: +20.4% for resolved cases with interview (strong)
Avg Prosecution: 2y 7m typical timeline; 28 currently pending
Total Applications: 497 across all art units (career history)
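The career figures above are simple ratios of the raw counts. A short sketch (illustrative only; the variable names are mine, not the dashboard's) reproduces the displayed numbers:

```python
# Sketch: reproduce the dashboard's career statistics from the raw counts shown above.
granted = 328             # cases allowed
resolved = 469            # cases resolved (allowed + abandoned/other)
total_applications = 497  # career total across all art units

allow_rate = granted / resolved * 100    # allowance rate among resolved cases
pending = total_applications - resolved  # applications not yet resolved

print(f"{allow_rate:.1f}%")  # 69.9% -> displayed rounded as 70%
print(pending)               # 28, matching "28 currently pending"
```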

Statute-Specific Performance

§101: 9.9% (-30.1% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 22.0% (-18.0% vs TC avg)
Based on career data from 469 resolved cases; Tech Center average values are estimates.
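Each "vs TC avg" figure is just the examiner's statute-specific rate minus the Tech Center average estimate; backing that average out of each row gives 40.0% every time, which suggests the dashboard applies a single flat estimate. A sketch under that assumption (the flat constant is inferred from the rows above, not documented):

```python
# Sketch: statute-specific deltas, assuming a flat 40.0% TC-average estimate
# (backed out from the displayed rows; an inference, not a documented constant).
TC_AVG_ESTIMATE = 40.0

examiner_rates = {"101": 9.9, "103": 49.8, "102": 10.7, "112": 22.0}

# delta = examiner's rate minus the Tech Center average, in percentage points
deltas = {statute: round(rate - TC_AVG_ESTIMATE, 1) for statute, rate in examiner_rates.items()}
print(deltas)  # {'101': -30.1, '103': 9.8, '102': -29.3, '112': -18.0}
```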

Office Action

§103 §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 9 and 18 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Regarding claim 9, the claim recites “the first number in the third splicing templates” (i.e., plural) in lines 7 and 12 of the claim, which lacks antecedent basis in the claim, or otherwise renders the claim unclear as to the intended scope. Claim 1, from which claim 9 depends, recites “a first number in the plurality of groups of splicing templates,” and previously states that each group has a different number of images.
Therefore, it appears that the first number applies to only a single group, but then claim 9 goes on to state “a first number in the third splicing templates.” The alternative way of reading claim 9 is that the determining step in lines 6 and 11 is intended to recite “determining a splicing template from the third splicing templates as the first splicing template, where a number of images for the determined splicing template matches the first number of images,” and similarly for the fourth splicing templates in lines 11-13. In other words, the claim as currently drafted is unclear as to whether “the first number” refers to a first number in the third splicing templates, which lacks antecedent basis, or refers back to “a first number in the plurality of groups” from the parent claim. Although the two readings may be functionally the same, the language is not clear and requires correction.

Claim 18 has substantially the same issues of indefiniteness as claim 9 set forth above and is rejected based on the same rationale.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 3, 10, 12, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yadav et al. (US 2020/0143514 A1) in view of Stein et al. (US 2018/0357694 A1) and in further view of Quek et al. (US 2010/0223568 A1).

Regarding claim 10, Yadav discloses:

An electronic device (Yadav, Abstract and ¶38: computing devices to implement collage system; ¶97: computing device 1402), comprising:

one or more processors (Yadav, ¶¶98-99: computing device including processing system configured as processors); and

a memory, configured to store one or more program codes executable by the one or more processors;

wherein the one or more processors, when loading and executing the one or more program codes, are caused to perform: (Yadav, ¶99: processor executable instructions; ¶100: computer readable storage media 1406 including memory; ¶107: software or program modules stored on computer-readable media accessed and executed by computing device 1402)

displaying an image splicing interface, the image splicing interface comprising a spliced-image previewing region, an image selecting region, and a template selecting region, wherein the spliced-image previewing region is configured to allow previewing a spliced image (Yadav, Fig. 4 and ¶55: GUI 122, including template menu 402 – i.e. “template selecting region” – where templates are selected from collage templates 118, digital image set 202 in image queue 304, see ¶53 – i.e. “image selecting region”; ¶52: collage creation module 110 utilizes an image layout selection module 224 to select an image layout 226 for generating a digital collage 120a, generating digital collage 120a by placing digital image set 202 into frames of image layout 226; see Fig. 4, working canvas 302 – i.e. “spliced-image previewing region”),

the image selecting region is configured to display a plurality of images in a target storage space (Yadav, Fig. 4, element 202 – digital image set; ¶44: a user, for instance, interacts with the image editing GUI 122 to select images from the digital images 116, which are aggregated as the digital image set 202),

and the template selecting region is configured to display a plurality of groups of splicing templates (Yadav, Fig. 4 and ¶53: selecting collage template for generating digital collage, where the template menu 402 includes a template set 404 that represents instances of the collage templates 118 that can be applied to generate different digital collages using the digital image set 202; see Figs. 4 to 5 showing user selecting different templates from a group of templates);

determining an image to be spliced (Yadav, Fig. 4 and ¶53: selecting collage template for generating digital collage, where the template menu 402 includes a template set 404 that represents instances of the collage templates 118 that can be applied to generate different digital collages using the digital image set 202); and

displaying a first spliced image in the spliced-image previewing region based on the image to be spliced and a first splicing template (Yadav, Fig. 4 and ¶53: selecting collage template for generating digital collage, where the template menu 402 includes a template set 404 that represents instances of the collage templates 118 that can be applied to generate different digital collages using the digital image set 202; see Figs. 4 to 5 showing user selecting different templates from a group of templates; ¶¶56-57: generate digital collage using images from the digital image set 202 and display the digital collage 120a within the working canvas 302 and a selected template), wherein the first splicing template is a splicing template that matches a first number in the plurality of groups of splicing templates, the first number being a number of the images to be spliced (Yadav, Fig. 4 and ¶¶56-57 disclose generating the collage using the images and the selected collage template – note Fig. 4 shows the template having 3 spaces used for 3 images in the working area).

Yadav does not explicitly disclose the determining of an image to be spliced in response to a select operation on at least one of the images in the image selecting region. Stein, however, discloses:

determining an image to be spliced in response to a select operation [of] at least one of the images in the image selecting region (Stein, Fig. 4 and ¶38: the user can select a photo from a photo collection 420 in the main design panel 410 to incorporate into or replace a photo in the product design 400);

the template selecting region is configured to display a plurality of groups of splicing templates, each group of splicing templates corresponding to a different number of images, and each group of splicing templates comprising at least one splicing template (Stein, Fig. 5A; ¶¶39-41: product type selection panel includes a plurality of dynamic objects 515, 516, which are user selectable and moveable to main panel 410 using user input actions such as drag and drop or touch – Figs. 5A-5B show different product designs having different numbers of images, where user selection changes the template for displaying images, as shown from Fig. 5A to 5B with a different number of images);

displaying a first spliced image in the spliced-image previewing region based on the image to be spliced and a first splicing template, wherein the first splicing template is a splicing template that matches a first number in the plurality of groups of splicing templates, the first number being a number of the images to be spliced (Stein, Figs. 5A and 5B and ¶48: automatically change between product designs corresponding to selection panel; ¶49: features in the second product design 550 that are not specified by the second product type can be automatically selected or created (step 345) by the intelligent product design creation engine 230; such features can include properties in product style and product layout, and the selection of photos; one or more photo(s) in the second product design 550 can be kept the same as the last product design in the main panel 410 after the directional movement 500 as shown in FIG. 5B; alternatively, the photo(s) in the second product design 550 can be automatically updated by the intelligent product design creation engine 230 in accordance with the new product type illustrated by the dynamic object 515).

Both Yadav and Stein are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, to include the interface control elements for changing the design layout for the plurality of images using different numbers of images as provided by Stein, using known electronic interfacing and programming techniques. The modification merely substitutes one menu for selecting a design layout for controlling the layout of images for another, yielding predictable results of incorporating the design menu for selecting different image layouts with different numbers of images in place of different types of layouts for selection. Moreover, the modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more diverse set of image layouts, giving a user more diverse options and better options for tailoring their aesthetic preferences.

The only limitation not explicitly disclosed by Yadav modified by Stein is that the particular selection of the images is performed in response to a select operation “on” at least one of the images. This appears to require a specific type of input which, although likely obvious and well-known at the time of the invention, is not explicitly stated by Yadav modified by Stein. Quek discloses:

determining an image to be spliced in response to a select operation on at least one of the images in the image selecting region (Quek, Fig. 11 and ¶¶83-84 disclose a user interface for digital images provided as thumbnail images of the digital images, where the user can conveniently use a computer mouse to select an image symbol 1110a, drag it across the user interface, and drop the image into the image receiving area 1125 (step 1720)).

Yadav, Stein and Quek are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images as provided by Stein, by further using the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, using known electronic interfacing and programming techniques. The modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more intuitive selection and control of the individual objects, resulting in an easier-to-use interface.

Regarding claim 1, the device of claim 10 performs the method of claim 1, and therefore claim 1 is rejected based on the same rationale as claim 10 set forth above.

Regarding claim 19, Yadav discloses:

A non-transitory computer-readable storage medium storing one or more program codes, wherein the one or more program codes, when loaded and executed by a processor of an electronic device, cause the electronic device to perform operations.
(Yadav, Abstract and ¶38: computing devices to implement collage system; ¶97: computing device 1402; ¶¶98-99: computing device including processing system configured as processors; ¶99: processor executable instructions; ¶100: computer readable storage media 1406 including memory; ¶107: software or program modules stored on computer-readable media accessed and executed by computing device 1402)

Further regarding claim 19, the operations perform the same method as claim 1, and therefore claim 19 is further rejected based on the same rationale as claim 1 set forth above.

Regarding claim 12, Yadav modified by Stein further discloses:

in response to a user selection, replacing an image already existent at any position of the first spliced image with an image corresponding to the user selection (Stein, Fig. 4 and ¶38: the user can select a photo from a photo collection 420 in the main design panel 410 to incorporate into or replace a photo in the product design 400).

Both Yadav and Stein are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images and including the technique for easily editing images in the template as provided by Stein, using known electronic interfacing and programming techniques. The modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more diverse set of image layouts, giving a user more diverse options and better options for tailoring their aesthetic preferences, while allowing for iterative editing, which provides ease of use and flexibility of control.

Yadav modified by Stein does not explicitly teach the user selection as a drag and drop operation. Quek, however, further teaches:

wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: in response to a drag operation on the image selecting region, placing with an image corresponding to the drag operation, in a case that an end point of the drag operation is within the image region (Quek, Fig. 11 and ¶83: user interface with user selected collage template 1100 in grid style, the template including a plurality of image receiving areas 1120, and the interface including digital images 1110a-1110d; ¶84: the user can conveniently use a computer mouse to select an image symbol 1110a, drag it across the user interface, and drop the image into the image receiving area 1125).

Note that it is the combination of the user input for selecting and placing an image in a region as taught by Quek, combined with the selection of an image to replace an already placed image in a template as provided by Yadav modified by Stein, that teaches, in response to a drag operation on any of the images in the image selecting region, replacing an image already existent at any position of the first spliced image with an image corresponding to the drag operation, in a case that an end point of the drag operation is within the first spliced image. Yadav, Stein and Quek are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images and including the technique for easily editing images in the template as provided by Stein, by further using the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, using known electronic interfacing and programming techniques. The modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more intuitive selection and control of the individual objects, resulting in an easier-to-use interface.

Regarding claim 3, the device of claim 12 performs the method of claim 3, and therefore claim 3 is rejected based on the same rationale as claim 12 set forth above.

Claims 2, 4-6, 11, and 13-15 are rejected under 35 U.S.C. 103 as being unpatentable over Yadav et al. (US 2020/0143514 A1) in view of Stein et al. (US 2018/0357694 A1) and Quek et al. (US 2010/0223568 A1), in further view of Van Os et al. (US 10,270,983 B1).

Regarding claim 11, the limitations included from claim 10 are rejected based on the same rationale as claim 10 set forth above. Further regarding claim 11, Yadav modified by Stein and Quek discloses:

wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: the images displayed in the image selecting region (see Fig. 6 of Yadav, digital image set 202; ¶99: processor executable instructions; ¶100 and ¶107 for processing; also see Stein, Fig. 4 and ¶38: user able to select photo from photo collection 420 in the main design panel; also see Fig. 12 of Quek disclosing a region for selecting images, e.g. 1210a, for placing on a template).

Yadav modified by Stein and Quek fails to explicitly disclose switching the images displayed in the image selecting region in response to a swipe operation on the image selecting region. The concept of having a scrolling section on the interface for allowing a user to scroll through more images than can be displayed on a screen at one time, using a swipe gesture, however, is a well known and conventional technique common to computer graphical interfaces. Van Os teaches:

switching the images displayed in the image selecting region in response to a swipe operation on the image selecting region (Van Os, Figs. 6I-6K: in FIGS. 6I-6K, device 600 detects input 644 (e.g., a scrolling gesture on display 601) on the displayed list of avatar options 630; as input 644 moves in a leftward direction across display 601, device 600 displays avatar options 630 scrolling to the left to reveal additional avatar options (e.g., a poop avatar option and a fox avatar option); also [76:64-66]: in some embodiments, a horizontal swipe gesture on the avatar selection region scrolls the displayed avatar options to reveal an avatar creation affordance; Figs. 10AE-10AF and [82:53-56]: in FIG. 10AE, device 600 detects swipe gesture 1055 on effects option affordances 1024 and, in response, scrolls effects option affordances 1024 to display screen effects affordance 1024-3 in FIG. 10AF).

Yadav, Stein, Quek and Van Os are directed to systems and methods for graphical user interfaces for combining a plurality of images on a display area based on user selected preferences. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images and including the technique for easily editing images in the template as provided by Stein and the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, by utilizing the tap and swipe gestures for operating interface elements as provided by Van Os, using known electronic interfacing and programming techniques. The modification results in an improved user interface for controlling a design of displayed images based on user selection of elements from a menu by allowing a more natural and intuitive control interface for navigating options in a limited screen space, while also allowing touch instead of a more cumbersome control device.

Regarding claim 2, the device of claim 11 performs the method of claim 2, and therefore claim 2 is rejected based on the same rationale as claim 11 set forth above.
Regarding claim 13, Yadav modified by Stein further discloses:

wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: determining a second number in response to a in a case that the first number is greater than or equal to the second number, acquiring updated images to be spliced by determining a target number of images from the first images to be spliced and deleting the target number of images from the first images to be spliced; or, in a case that the first number is less than the second number, acquiring updated images to be spliced by determining a target number of images from the target storage space and adding the target number of images to the images to be spliced; and displaying a second spliced image based on the updated images to be spliced and the second splicing template (Stein, Figs. 5A and 5B and ¶48: automatically change between product designs corresponding to selection panel; ¶49: features in the second product design 550 that are not specified by the second product type can be automatically selected or created (step 345) by the intelligent product design creation engine 230; such features can include properties in product style and product layout, and the selection of photos; one or more photo(s) in the second product design 550 can be kept the same as the last product design in the main panel 410 after the directional movement 500 as shown in FIG. 5B; alternatively, the photo(s) in the second product design 550 can be automatically updated by the intelligent product design creation engine 230 in accordance with the new product type illustrated by the dynamic object 515; ¶48: in response to the directional movement 500, referring now to FIGS. 2-4, 5A, 5B, the photo product design (400) in the main design panel 410 is automatically changed to a second product design 550 having a second product type corresponding to the dynamic object 515 in the product type selection panel 510 (step 340); the photo product design in the main design panel 410 before the change can be the initial product design 400 or another product design that has been changed from the initial product design 400 in product style, product layout or other product parameters; Figs. 5A to 5B show the updating of the number of selected images changing based on which design is used – note that Stein discloses a plurality of designs having differing numbers of images as shown in the figures, and updating the number of images based on which product design is selected [figure from Stein reproduced in the original Office Action], disclosing that user selection of a different product design results in a change in the number of images that are displayed [figure from Stein reproduced in the original Office Action]; also ¶34: the phrase “product layout” (or page layout) specifies the number, the sizes, the positions of images on a page, the gaps between the images and at the border of the page).

Both Yadav and Stein are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, to include the interface control elements for changing the design layout for the plurality of images using different numbers of images as provided by Stein, using known electronic interfacing and programming techniques. The modification merely substitutes one menu for selecting a design layout for controlling the layout of images for another, yielding predictable results of incorporating the design menu for selecting different image layouts with different numbers of images in place of different types of layouts for selection. Moreover, the modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more diverse set of image layouts, giving a user more diverse options and better options for tailoring their aesthetic preferences.

The only limitation not taught by Yadav modified by Stein and Quek is the use of a “tap operation” as user input for performing a command on the user interface. Use of a tap input, however, is well-known and conventional for selecting an element to perform an operation, such as selection of an element like the product design or template taught by Yadav modified by Stein and Quek. However, Van Os teaches a response to a tap operation on a user interface element (Van Os, Fig. 8AI and [61:7-29] disclose a user interface for selecting a menu object using a tap gesture – “Sticker options menu 856 and stickers 858 are similar to sticker options menu 656 and stickers 658 discussed above. The stickers are static graphical objects that may be selected by a user and applied to the image in image display region 820. In some embodiments a sticker can be selected by a tap gesture, and the corresponding sticker is then displayed at a position on the image display region 820.”).

Yadav, Stein, Quek and Van Os are directed to systems and methods for graphical user interfaces for combining a plurality of images on a display area based on user selected preferences.
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images and including the technique for easily editing images in the template as provided by Stein and the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, by utilizing the tap gesture for operating interface elements as provided by Van Os, using known electronic interfacing and programming techniques. The modification results in an improved user interface for controlling a design of displayed images based on user selection of elements from a menu by allowing a more natural and intuitive control interface, while also allowing touch instead of a more cumbersome control device. Regarding claim 4, the device of claim 13 performs the method of claim 4 and therefore claim 4 is rejected based on the same rationale as claim 13 set forth above. Regarding claim 14, Yadav modified by Stein further discloses: wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: in a case that the first number is greater than or equal to the second number, determining, based on selection sequence numbers corresponding to the images to be spliced, a first n images that appears in a selection sequence of the first images as the target number of images, wherein n is equal to the target number. (Stein, Figs. 
5A and 5B and ¶48: automatically change between product designs corresponding to selection panel; ¶49: Features in the second product design 550 that are not specified by the second product type can be automatically selected or created (step 345) by the intelligent product design creation engine 230. Such features can include properties in product style and product layout, and the of selections photos. One or more photo(s) in the second product design 550 can be kept the same as the last product design in the main panel 410 after the directional movement 500 as shown in FIG. 5B. Alternatively, the photo(s) in the second product design 550 can be automatically updated by the intelligent product design creation engine 230 in accordance to the new product type illustrated by the dynamic object 515; ¶48: In response to the directional movement 500, referring now to FIG. 2-4, 5A, 5B, the photo product design (400) in the main design panel 410 is automatically changed to a second product design 550 having a second product type corresponding to the dynamic object 515 in the product type selection panel 510 (step 340). The photo product design in the main design panel 410 before the change can be the initial product design 400 or another product design that has been changed from the initial product design 400 in product style, product layout or other product parameters. Figs. 5A to 5B show the updating of the number of selected images changing based on which design is used; Also ¶34: The phrase “product layout” (or page layout) specifies the number of images on a page) Both Yadav and Stein are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, to include the interface control elements for changing the design layout for the plurality of images using different numbers of images as provided by Stein, using known electronic interfacing and programming techniques. The modification merely substitutes one menu for selecting a design layout for controlling the layout of images for another, yielding predictable results of incorporating the design menu for selecting different image layouts with different number of images in place of different types of layouts for selection. Moreover, the modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more diverse set of image layouts for more diverse options for a user and providing a user better options for tailoring their aesthetic preferences. Regarding claim 5, the device of claim 14 performs the method of claim 5 and therefore claim 5 is rejected based on the same rationale as claim 14 set forth above. 
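For readers mapping the claim language to behavior, the claim 14 logic quoted above (when the first number of already-selected images is greater than or equal to the second number required by the template, keep the first n images in the user's selection sequence, n being the target number) can be sketched in Python. This sketch is an editorial illustration only, not part of the prosecution record; the function and variable names are hypothetical.

```python
def pick_by_selection_order(selected, target_count):
    """Keep the first `target_count` images in the order the user
    selected them, per the claim 14 limitation (first number >= second
    number case). `selected` is a list of
    (selection_sequence_number, image_id) pairs."""
    ordered = sorted(selected, key=lambda pair: pair[0])
    return [image_id for _, image_id in ordered[:target_count]]


# Images tapped in the order a, b, c, d but stored out of order:
images = [(3, "c.jpg"), (1, "a.jpg"), (2, "b.jpg"), (4, "d.jpg")]
print(pick_by_selection_order(images, 2))  # → ['a.jpg', 'b.jpg']
```

Note the pick is driven by the selection sequence numbers, not by storage order, which is what distinguishes this limitation from a simple slice of the album.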
Regarding claim 15, Yadav modified by Stein further discloses: Wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform at least one of: (Claimed as alternatives) in a case that the first number is less than the second number, determining, based on a sequence of images to be spliced in the target storage space, the target number of images adjacent to the images to be spliced as the second images to be added; determining, based on an image attribute condition, the target number of images to be added from the target storage space, wherein the image attribute condition refers to the second images having a target image attribute; or determining, based on an image data condition, the target number of images to be added from the target storage space, wherein the image data condition refers to image dates of the second images within a target period (Stein, ¶30: If users 70, 71 are members of a family or a group (e.g. a soccer team), the images from the cameras 62, 63 and the mobile device 61 can be grouped together to be incorporated into a photo product such as a photobook, or used in a blog page for an event such as a soccer game; ¶37: The initial photo product design 400 may be automatically created by the intelligent product design creation engine 230 based on the knowledge about the user's recent activities, social relationships, important events, hobbies, time and location information, mobile data, past product designs, and order histories; ¶¶54-55 discloses changing between two product designs/layouts, where features in the new design can be automatically selected, including one or more photos from the first product design can be kept, or photos in the second design can be automatically updated by the intelligent design creation engine) Both Yadav and Stein are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts. 
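The three alternative selection conditions recited in claim 15 above (adjacency in the target storage space, a target image attribute, or image dates within a target period) can likewise be sketched as follows. This is an editorial illustration under assumed data shapes, not part of the record; all names are hypothetical.

```python
from datetime import date

def pick_adjacent(storage, last_selected_index, count):
    """Alternative 1: take the images adjacent to (here: following) the
    already-selected images in the storage-space ordering."""
    return storage[last_selected_index + 1 : last_selected_index + 1 + count]

def pick_by_attribute(storage, count, has_target_attribute):
    """Alternative 2: take images having a target image attribute."""
    return [img for img in storage if has_target_attribute(img)][:count]

def pick_by_date(storage, count, start, end, date_of):
    """Alternative 3: take images whose image dates fall within a target period."""
    return [img for img in storage if start <= date_of(img) <= end][:count]


album = ["a.jpg", "b.jpg", "c.jpg", "d.jpg", "e.jpg"]
dates = {"a.jpg": date(2023, 1, 1), "b.jpg": date(2023, 6, 1),
         "c.jpg": date(2023, 6, 2), "d.jpg": date(2024, 1, 1),
         "e.jpg": date(2024, 2, 1)}

print(pick_adjacent(album, 1, 2))                       # → ['c.jpg', 'd.jpg']
print(pick_by_attribute(album, 2, lambda i: "b" <= i))  # → ['b.jpg', 'c.jpg']
print(pick_by_date(album, 2, date(2023, 6, 1), date(2023, 12, 31),
                   dates.__getitem__))                  # → ['b.jpg', 'c.jpg']
```

Because the claim recites these as alternatives, prior art teaching any one branch (as the examiner argues Stein's automatic photo selection does) is asserted to meet the limitation.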
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, to include the interface control elements for changing the design layout for the plurality of images using different numbers of images and based on automatically associated features of the images as provided by Stein, using known electronic interfacing and programming techniques. The modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more diverse set of image layouts for more diverse options for a user and providing a user better options for tailoring their aesthetic preferences, while automating the grouping of images based on relevant criteria automatically to better assist a user with design and image retrieval. Regarding claim 6, the device of claim 15 performs the method of claim 6 and therefore claim 6 is rejected based on the same rationale as claim 15 set forth above. Claim(s) 7 and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Yadav et al. (US 2020/0143514 A1) in view of Stein et al. (US 2018/0357694 A1) and Quek et al. (US 2010/0223568 A1) in further view of Cheng (US 2025/0200848 A1, with priority to CN 202210323839.3, filed Mar. 29, 2022) Regarding claim 16, the limitations included from claim 10 are rejected based on the same rationale as claim 10 set forth above. 
Further regarding claim 16, Yadav further discloses: Wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: wherein said displaying the image splicing interface comprises: displaying the image splicing interface in response to a splicing function in the image editing interface being triggered (Yadav, ¶52: The user then navigates to an action menu 306, which presents a number of available options for image editing using the digital image set 202, and from the action menu 306, the user selects a create collage option 308, which causes a digital collage creation process to be initiated by the image editing application 108.; Fig. 4 and ¶53: In the scenario 400, and in response to selection of the create collage option 308 from the action menu 306, a template menu 402 is presented in the image editing GUI 122) Yadav modified by Stein and Quek does not explicitly disclose bringing up an editing interface from an image selecting interface as claimed. Cheng discloses: displaying an image selecting interface, wherein the image selecting interface is configured to display the plurality images in the target storage space; displaying an image editing interface in response to a select operation on at least one of the images in the image selection interface (Cheng, Fig. 
1 and ¶35: a plurality of photos are displayed in the form of thumbnails in an interface of a “album” in the smart phone, where a user selects a to-be-edited photo from the “album” by means of clicking a thumbnail, and then the smart phone loads the selected to-be-edited photo, starts an image editing interface, and displays the to-be-edited photo in a display control (View) of the image editing interface, and then, the user may edit the image by using an image editing tool (a function control) provided in the image editing interface – further note display of image editing tools at bottom of image editing interface; ¶41: images stored in terminal device) Note that the teaching of the incorporating of a menu of tools provided by Cheng in the image editing screen brought up after user selects an image, along with the teaching of Yadav where a menu includes a selectable image collage element teaches the limitations of the claim. Yadav, Stein, Quek, and Cheng are directed to systems and methods for user interfaces for image editing. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images as provided by Stein, and using the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, by further incorporating the technique of providing an image album for user selection that allows for further editing based on user selection of an image as provided by Cheng, using known electronic interfacing and programming techniques. 
The modification results in an improved user interface by allowing a user access to a photo album to view images without having to be in an editing page, allowing a user to view stored images more easily and not being overloaded with interface elements that might otherwise be distracting or taking up unnecessary screen space, but also allowing additional editing functionality to a user when desired. Regarding claim 7, the device of claim 16 performs the method of claim 7 and therefore claim 7 is rejected based on the same rationale as claim 16 set forth above. Claim(s) 8 and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Yadav et al. (US 2020/0143514 A1) in view of Stein et al. (US 2018/0357694 A1) and Quek et al. (US 2010/0223568 A1) in further view of Cheng (US 2025/0200848 A1, with priority to CN 202210323839.3, filed Mar. 29, 2022) and Van Os et al. (US 10,270,983 B1). Regarding claim 17, the limitations included from claim 10 are rejected based on the same rationale as claim 10 set forth above. Further regarding claim 17, Yadav further discloses: Wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: displaying an application function interface, wherein the application function interface comprises a splicing function control (Yadav, ¶52: The user then navigates to an action menu 306, which presents a number of available options for image editing using the digital image set 202, and from the action menu 306, the user selects a create collage option 308, which causes a digital collage creation process to be initiated by the image editing application 108); and Displaying an image selecting interface(Yadav, ¶52: digital collage creation process initiated by the image editing application 108; Fig. 
4 and ¶53: In the scenario 400, and in response to selection of the create collage option 308 from the action menu 306, a template menu 402 is presented in the image editing GUI 122, where figure 4 shows interface including Digital image Set 202) Also Yadav discloses displaying an image splicing interface (Yadav, Fig. 4 and ¶55: GUI 122, including template menu 402 – i.e. “template selecting region” – where templates are selected from collage templates 118, digital image set 202 in image queue 304, see ¶53 – i.e. “image selecting region”; ¶52: collage creation module 110 utilizes an image layout selection module 224 to select an image layout 226 for generating a digital collage 120a, generating digital collage 120a by placing digital image set 202 into frames of image layout 226; See Fig. 4, working canvas 302 – i.e. “spliced-image previewing region”) Yadav modified by Stein and Quek does not explicitly disclose bringing up an editing interface from an image selecting interface as claimed. Cheng discloses: wherein said displaying the image [editing] interface comprises: displaying the image splicing interface in response to a select operation on at least one image in the image selecting interface. (Cheng, Fig. 
1 and ¶35: a plurality of photos are displayed in the form of thumbnails in an interface of a “album” in the smart phone, where a user selects a to-be-edited photo from the “album” by means of clicking a thumbnail, and then the smart phone loads the selected to-be-edited photo, starts an image editing interface, and displays the to-be-edited photo in a display control (View) of the image editing interface, and then, the user may edit the image by using an image editing tool (a function control) provided in the image editing interface – further note display of image editing tools at bottom of image editing interface; ¶41: images stored in terminal device) The combination of the teachings of loading a particular editing interface based on selection of an image in an image selecting interface as taught by Cheng with the teachings of the known editing interface in the form of the splicing interface of Yadav teaches the particulars of the claim limitation. Yadav, Stein, Quek, and Cheng are directed to systems and methods for user interfaces for image editing. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images as provided by Stein, and using the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, by further incorporating the technique of providing an image album for user selection that allows for further editing based on user selection of an image as provided by Cheng, using known electronic interfacing and programming techniques. 
The modification results in an improved user interface by allowing a user access to a photo album to view images without having to be in an editing page, allowing a user to view stored images more easily and not being overloaded with interface elements that might otherwise be distracting or taking up unnecessary screen space, but also allowing additional editing functionality to a user when desired. The only remaining limitation not taught by Yadav, Stein, Quek, and Cheng is the particular arrangement of GUI parts, namely displaying an image selecting interface in response to a tap on a function control (note Yadav teaches displaying an image selecting interface based on a user input selecting a splicing function control, as the user interface displayed after the selection of the collage menu element includes the image selecting interface 202 – see Fig. 4, but the particular input type is not recited). The technique of displaying an image selection interface in response to a user tap operation, however, is well-known, as the claim is merely reciting that a user uses a tap input to bring up another user interface element from which additional image elements can be selected. Van Os teaches: Displaying an image selecting interface in response to a tap operation on the function control (Van Os, Fig. 6D and [39:62-63]: In FIG. 6D, device 600 detects input 623 (e.g., a tap gesture on display 601) on effects affordance 622; Fig. 
6E and [39:64-40:26] discloses in response to detecting input 623, device updates image data and also updates camera operations region 625, replacing camera option affordances 619 with visual effects option affordances 624, where the visual effects option affordances include avatar effects affordance 624-1 and sticker effects affordance 624-2 and visual effects option affordances 624 correspond to different visual effects that can be applied to the image displayed in image display region 620) Note that the combination of the teaching of a user selecting a splicing function control to bring up new user interface elements for the combining of the images as taught by Yadav, with the image selection technique of Cheng, combined with the teachings of displaying an additional interface for selecting elements provided by Van Os teaches the limitations of the claims in view of what would be obvious to one of ordinary skill in the art at the time of the invention, namely within the field of graphical user interface design and implementation. Yadav, Stein, Quek, Cheng and Van Os are directed to systems and methods for graphical user interfaces for editing images on a display area based on user selected preferences. 
It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images and including the technique for easily editing images in the template as provided by Stein and the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, by further incorporating the technique of providing an image album for user selection that allows for further editing based on user selection of an image as provided by Cheng, by utilizing the tap and swipe gestures for operating interface elements as provided by Van Os, using known electronic interfacing and programming techniques. The modification results in an improved user interface for controlling a design of displayed images based on user selection of elements from a menu by allowing a more natural and intuitive control interface for navigating options on a limited screen space, while also allowing touch instead of a more cumbersome control device. Regarding claim 8 the device of claim 17 performs the method of claim 8 and therefore claim 8 is rejected based on the same rationale as claim 17 set forth above. Claim(s) 9 and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over: Yadav et al. (US 2020/0143514 A1) in view of Stein et al. (US 2018/0357694 A1) and Quek et al. (US 2010/0223568 A1) in further view of Faulkner et al. (US 2019/0377586 A1). Regarding claim 18, the limitations included from claim 10 are rejected based on the same rationale as claim 10 set forth above. 
Further regarding claim 18, Yadav further discloses: Wherein the one or more processors, when loading and executing the one or more program codes, are further caused to perform: determining a splicing template whose number of images matches the first number in the third splicing templates as the first splicing template or determining a splicing template whose number of images matches the first number in the fourth splicing templates as the first splicing template (Yadav, ¶53: the template set 404 is selected such that a number of digital frames included in each collage template is equal to a number of images in the digital image set 202; Fig. 4 and ¶54: template menu 402 presents the template set 404 sorted into different categories, including landscape templates 406 that present collage template options in a landscape orientation, and portrait templates 408 that present collage template options in a portrait orientation) Although Yadav discloses selecting collage templates based on a determined parameter (e.g. ¶¶86-87 discusses selecting a template based on an optimal calculated error value), Yadav does not explicitly teach the selection criteria being history of user or usage of all users as claimed. Stein teaches determining a third splicing template in each group of splicing templates based on a history splicing process of a user (Stein, ¶37: The initial photo product design 400 may be automatically created by the intelligent product design creation engine 230 based on the knowledge about the user's recent activities, social relationships, important events, hobbies, time and location information, mobile data, past product designs, and order histories; ¶45: the dynamic objects 535, 536 in the product layout selection panel 520 are automatically generated by the intelligent product design creation engine 230 based on the product layouts stored in the product layout library 228.) 
Both Yadav and Stein are directed to systems and methods for user interfaces for combining a plurality of images into user specified layouts. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, to include the interface control elements for changing the design layout for the plurality of images using different numbers of images and based on automatically associated features of the images as provided by Stein, using known electronic interfacing and programming techniques. The modification results in an improved user interface for designing the coordination and grouping of photos together into a single image layout by allowing for a more diverse set of image layouts for more diverse options for a user and providing a user better options for tailoring their aesthetic preferences, while automating the grouping of images based on relevant criteria automatically to better assist a user with design and image retrieval. 
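The "use frequency is ranked at a top place" selection recited in claims 9 and 18 amounts to a per-group frequency ranking over splicing-template usage (whether the history is one user's or all users'). It can be sketched as follows; this is an editorial illustration only, not part of the record, and the names are hypothetical.

```python
from collections import Counter

def top_template_per_group(usage_history):
    """`usage_history` maps a template-group id to the list of template
    ids applied in past splicing processes (by one user, or by all
    users). Return each group's most frequently used template."""
    return {
        group: Counter(uses).most_common(1)[0][0]
        for group, uses in usage_history.items()
    }


history = {"2-image": ["grid", "strip", "grid"], "4-image": ["quad"]}
print(top_template_per_group(history))  # → {'2-image': 'grid', '4-image': 'quad'}
```

This framing shows why the examiner reads Faulkner's learned layout-ranking parameters (personal history or a population of users) onto the limitation: both reduce to ordering candidate layouts by observed selection frequency and surfacing the top-ranked one.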
Faulkner discloses: Determining a third splicing template in each group of splicing templates based on a history splicing process of a user, wherein the third splicing template in each group of splicing templates is a splicing template whose use frequency is ranked at a top place; or determining a fourth splicing template in each group of splicing templates based on usage of the splicing templates by all users, wherein the fourth splicing template in each group of splicing templates is a splicing template whose number of users is ranked at a top place (Faulkner, Abstract: The disclosed system generates a customized layout based on an analysis of characteristics of graphical items to be displayed, where the graphical items can include file content of different types (e.g., text, images, etc.), and the system can analyze preferred characteristics that are based on previously selected graphical items and previously used layouts, the system can then configure a customized layout that includes one or more display areas, where each display area contains at least one graphical item and the preferred characteristics can be used to automatically select graphical items that have a characteristic that correlates with a characteristic of a previously selected graphical item, and presenting the customized layout to the user; ¶¶56-57: layout generation module ranks multiple candidate layouts by a ranking algorithm 414, where: the ranking algorithm 414 can be learned based on a user's personal history of configuring layouts. That is, at least some of the aforementioned parameters can be learned and/or adjusted based on previous graphical item selections and/or previous arrangements of selected graphical items. Consequently, at least some of the parameters of the algorithm used to rank the different layouts can be tuned and/or updated based on a user's personal history. 
This enables the algorithm to adapt to a “style” and/or tailor to behaviors of a user as they relate to configuring a layout of graphical items. Additionally or alternatively, the parameters of the algorithm can be learned and/or adjusted based on a general history of a population of users. ¶105: prioritize based on activity performed by more important people or streams shared; ¶106: at least some of the aforementioned parameters 650 can be learned and/or adjusted based on behavioral characteristics (e.g., previous layout selections) of the host or the producer considering detection of particular activity – note that accounting for a previous selection indicates frequency of use, namely if used at least once; ¶107: Additionally or alternatively, the parameters of the algorithm can be learned and/or adjusted based on a general history of a population of users (e.g., common layout selections considering detection of particular activity) – note that determination of “common layout” indicates frequency of use, i.e. determining selection is performed often compared to others; ¶123: ranking produced by the algorithm can provide that candidate layout 806(1) is the highest ranked candidate layout and can place the candidate layout 806(1) in a first recommended position (e.g., from left to right on the display screen), and provide that candidate layout 806(2) is the second highest ranked candidate layout and can place the candidate layout 806(2) in a second recommended position (e.g., from left to right on the display screen), and so forth; Fig. 
9A and ¶¶126-127: GUI displays layout staging area 902 for producer to identify and select a next layout to be shared, where the candidate layout displayed in the next layout staging area 902 is automatically determined based on the ranking produced by the algorithm, where “the algorithm provides that candidate layout 806(1) is the highest ranked candidate layout, and thus, the candidate layout 806(1) is placed in a first recommended position (e.g., from left to right on the display screen) and also is automatically displayed in the next layout staging area 902 so the host or the producer can preview the next layout and select a control option (e.g., “go live” control option 904) to make the candidate layout displayed in the next layout staging area 902 be displayed in the presentation area 804) Yadav, Stein, Quek and Faulkner are directed to systems and methods for graphical user interfaces for combining a plurality of images on a display area based on user selected preferences. It would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention and with a reasonable expectation of success, to modify the user interface having a plurality of interactive regions for controlling the combination and modification of a plurality of images from a selected set of images as provided by Yadav, including the interface control elements for changing the design layout for the plurality of images using different numbers of images and including the technique for easily editing images in the template as provided by Stein and the graphical user interface control to allow a user to select and place an image into the template as provided by Quek, by utilizing the ranking of layouts based on user history as provided by Faulkner, using known electronic interfacing and programming techniques. 
The modification results in an improved user interface for controlling a design of displayed images by providing easier access to more likely relevant or desired design elements (see Faulkner, ¶5 – “Such an improved user interaction can lead to the reduction of inadvertent inputs and redundant inputs, and based on which other efficiencies, including production efficiencies, network efficiencies, processing efficiencies, memory efficiencies, and network usage efficiencies, can be improved.”) Regarding claim 9, the device of claim 18 performs the method of claim 9 and therefore claim 9 is rejected based on the same rationale as claim 18 set forth above. Conclusion Any inquiry concerning this communication or earlier communications from the examiner should be directed to WILLIAM A BEUTEL whose telephone number is (571)272-3132. The examiner can normally be reached Monday-Friday 9:00 AM - 5:00 PM (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, DANIEL HAJNIK can be reached at 571-272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. 
For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /WILLIAM A BEUTEL/Primary Examiner, Art Unit 2616

Prosecution Timeline

Dec 28, 2023
Application Filed
Dec 31, 2025
Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12581262
AUGMENTED REALITY INTERACTION METHOD AND ELECTRONIC DEVICE
Granted Mar 17, 2026 (2y 5m to grant)
Patent 12572258
APPARATUS AND METHOD WITH IMAGE PROCESSING USER INTERFACE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12566531
CONFIGURING A 3D MODEL WITHIN A VIRTUAL CONFERENCING SYSTEM
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561927
MEDIA RESOURCE DISPLAY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12554384
SYSTEMS AND METHODS FOR IMPROVED CONTENT EDITING AT A COMPUTING DEVICE
Granted Feb 17, 2026 (2y 5m to grant)
Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
70%
Grant Probability
90%
With Interview (+20.4%)
2y 7m
Median Time to Grant
Low
PTA Risk
Based on 469 resolved cases by this examiner. Grant probability derived from career allow rate.
