Prosecution Insights
Last updated: April 19, 2026
Application No. 18/764,005

COLLAGE GENERATION OF COMPLEMENTARY OBJECTS

Status: Non-Final OA (§103)
Filed: Jul 03, 2024
Examiner: CHEN, BIAO
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Pinterest Inc.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
Predicted OA Rounds: 1-2
Predicted Time to Grant: 2y 5m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (27 granted / 32 resolved), +22.4% vs TC avg — above average
Interview Lift: +26.3% in resolved cases with interview — strong
Avg Prosecution: 2y 5m (typical timeline); 25 applications currently pending
Career History: 57 total applications across all art units

Statute-Specific Performance

§101: 4.7% (-35.3% vs TC avg)
§103: 69.1% (+29.1% vs TC avg)
§102: 9.8% (-30.2% vs TC avg)
§112: 15.7% (-24.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 32 resolved cases.

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities: In page 23, “complementary image segment determination engine 820” should read “complementary image segment determination engine 802”. Appropriate correction is required.

Claim Objections

Claims 3, 12, 13, and 19 are objected to because of the following informalities:

In claim 3, line 7, “object segments” should read “image segments”.
In claim 12, line 3, “segmented extracted” should read “segment extracted”, and “includes” should read “including”.
In claim 13, line 7, “object segments” should read “image segments”.
In claim 19, line 2, “the first plurality of images are extracted” should read “the first plurality of image segments are extracted”.

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. 
Patentability shall not be negated by the manner in which the invention was made.

Claims 1-4, 6-8, 10-13, 15-16, 18, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Temple et al. (WO2023115044 A1, hereinafter “Temple”) in view of Diakopoulos et al. (Mediating Photo Collage Authoring, UIST05: The 18th Annual ACM Symposium on User Interface Software and Technology, October 23-26, 2005, hereinafter “Diakopoulos”).

Regarding claim 1, Temple discloses A computer-implemented method, comprising: (para. [0095], “Implementations disclosed herein may include a computer-implemented method”).

receiving a first image segment that is extracted from a first image and includes a representation of an object of interest; (para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”; FIG. 1A: a first image “112” with a bottle as an object of interest; FIG. 1B: “112-2” of the bottle as a first image segment that is extracted from image “112”). Note that: (1) the object (bottle) can be regarded as an object of interest while a corresponding first image segment (bottle) is distinguished; and (2) the first image segment (bottle) is determined or extracted by processing a first image (a first image “112” with a bottle).

determining, based at least in part on the object of interest, a first plurality of objects that are complementary to the object of interest and are represented in a first plurality of image segments extracted from a first plurality of images; (para. [0023], “In some implementations, additional images 124, image segments, and/or extracted image segments, such as images/extracted image segments that are visually similar to the image segment 112-2 may also be presented on the user interface of the device 100 in response to a user selection of an image 112. For example, in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to the user as additional images 124”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: (1) the images with glass, bottle, car, table, chairs, and shoe in FIG. 1A, the additional images 124 in FIG. 1B, the images with TV, lamp, and lights in FIG. 1E, and the additional images (lamps and lights) in FIG. 1F can be mapped into a first plurality of images; (2) the objects in the first plurality of images (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) can be mapped into a first plurality of objects that are complementary to the object of interest (bottle presented by the image segment 112-2) corresponding to the user’s same life style; and (3) the image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) representing the objects, as a first plurality of image segments, can be extracted or determined from the first plurality of the images (the objects in FIG. 1A and “124” in FIG. 1B) using the processing in para. [0023] of Temple above.

storing the collage as a content item configured to be stored and maintained by an online service, wherein: (para. [0085], “a collage management component 1014 that maintains, for example, collages created and/or viewed by the user of the user device, extracted image segments, etc., and/or performs some or all of the implementations discussed herein”; para. [0092], “The data store 1103 can include several separate data table, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store 1103 may include digital items (e.g., images) and corresponding metadata (e.g., image segments, popularity, source) about those items. Collage data and/or user information and/or other information may likewise be stored in the data store”). Note that: the data store as an online service stores and maintains collage data as digital content items created and/or viewed by the user of the user device.

the collage includes a first respective link to each of the first image segment and the first plurality of image segments; (FIG. 7A: the collage 740 includes five extracted image segments 743-1, 743-2, 743-4, 743-5, 743-6, and a typed text input 743-3 of "MY CHRISTMAS LIST"; para. [0072], “The extracted image segments of the collage 740 may be processed by the example process 600 and a determination made that extracted image segments 743-1 (bicycle), 743-2 (cowboy hat), and 743-5 (book) correspond to buyable objects. As such, a buyable indication 745-1, 745-2, and 745-3 are presented next to the respective extracted image segment. In this example, the object of a sweater that is represented by the extracted image segment 743-4 may have been previously indicated as buyable and now indicated as purchased, through presentation of the purchased indicator 747.”). 
Note that: (1) among the extracted image segments, 743-1 (bicycle), replacing the first image segment “bottle”, can be regarded as the first image segment in one example for explanation purposes only here; (2) 743-2 (hat), 743-4 (sweater), and 743-5 (book), replacing the first plurality of image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights), can be regarded as the first plurality of image segments in one example for explanation purposes only here; and (3) the buyable or purchased indicator (745-1, 745-2, 747, or 745-3) for each of 743-1 (bicycle) (the first image segment) and 743-2 (hat) / 743-4 (sweater) / 743-5 (book) (the first plurality of image segments) can be regarded as a first respective link.

the first image segment includes a second link to the first image; (para. [0095], “a first extracted image segment that includes at least pixel data corresponding to pixels of the first image segment, and metadata indicating at least one of the first image or a source location of the first image”). Note that: the metadata indicating at least one of the first image or a source location of the first image can be regarded as a second link to the first image.

each of the first plurality of image segments includes a third respective link to a corresponding image from an image of the first plurality of images from which it was extracted. (para. [0045], “the metadata may include, but is not limited to, an indication of the image from which the image segment was extracted”). Note that: the metadata of each image segment of the first plurality of image segments can be mapped into a third respective link to the corresponding image from which the image segment is extracted.

causing the collage to be presented on a client device; and (FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the collage to be presented. 
Temple fails to disclose, but in the same art of computer graphics, Diakopoulos discloses:

determining, based at least in part on the object of interest and the first plurality of objects, a collage layout template from a plurality of collage layout templates, wherein the collage layout specifies an arrangement of the first image segment and the first plurality of image segments to form a collage; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the layout template can be determined or selected from a library of layout templates (a plurality of collage layout templates); and (2) the user can select a layout template that is based on a given quantity of photos or a total number of objects (the object of interest and the first plurality of objects) with an arrangement of empty cells for each photo (object corresponding image segment) to be included. 
generating, based at least in part on the collage layout, the collage that includes the first image segment and the first plurality of image segments in the arrangement specified by the collage layout; (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the collage is generated or constructed with the collage layout and the image segments (the first image segment and the first plurality of image segments); and (2) the image segments can be arranged by the layout template.

Temple and Diakopoulos are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply the collage layout template, as taught by Diakopoulos, to Temple. The motivation would have been: “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level.” (Diakopoulos, page 183, Abstract). Doing so would allow a collage to be generated efficiently. Therefore, it would have been obvious to combine Temple and Diakopoulos.

Regarding claim 2, Temple in view of Diakopoulos discloses The computer-implemented method of claim 1, further comprising:

receiving, in response to presenting of the collage on the client device, a selection of at least one second image segment from the first plurality of image segments and the first image segment; (Temple, para. [0018], “A user may select an image segment and the pixels of the image corresponding to the selected image segment are extracted to generate an extracted image segment”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: after the collage has been generated and presented on the user’s device (the client device), the user can select at least one second image segment (lamp) from the first plurality of image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) and the first image segment (bottle).

determining a second plurality of objects that are complementary to objects represented in the at least one second image segment and are represented in a second plurality of image segments extracted from a second plurality of images; (Temple, para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”). Note that: (1) TV, pickup truck, lamp, and lights are mapped into a second plurality of objects while the corresponding images are mapped into a second plurality of images; and (2) the second plurality of objects (TV, pickup truck, lamp, and lights) are related to life style and complementary to objects represented in the at least one second image segment (lamp) while they are extracted from the corresponding second plurality of images.

causing the second collage to be presented on the client device. (Temple, FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the second collage. 
determining, based at least in part on the at least one second image segment and the second plurality of image segments, a second collage layout template from the plurality of collage layout templates, wherein the second collage layout specifies a second arrangement of the at least one second image segment and the second plurality of image segments to form a second collage; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”).

Note that: (1) a second layout template can be determined or selected from a library of layout templates (the plurality of collage layout templates); and (2) the user can select a layout template that is based in part on at least one second image segment (lamp) and the second plurality of image segments (TV, pickup truck, lamp, and lights) with an arrangement of empty cells as a second arrangement for each photo (object corresponding image segment) to be included to form a collage as a second collage. 
generating, based at least in part on the second collage layout, the second collage that includes the at least one second image segment and the second plurality of image segments in the second arrangement specified by the second collage layout; and (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the second collage is generated or constructed with the second collage layout and the at least one second image segment (lamp) and the second plurality of image segments in the second arrangement specified by the second collage layout; (2) a similar collage generation method to that for claim 1 above can be adopted or repeated; and (3) the image segments can be arranged by the layout template.

The motivation to combine Temple and Diakopoulos given in claim 1 is incorporated here.

Regarding claim 3, Temple in view of Diakopoulos discloses The computer-implemented method of claim 1, further comprising:

receiving, in response to presenting of the collage on the client device, a selection of the first image segment; (Temple, para. [0018], “A user may select an image segment and the pixels of the image corresponding to the selected image segment are extracted to generate an extracted image segment”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: after the collage has been generated and presented on the user’s device (the client device), the user can select the first image segment (bottle).

determining, based at least in part on the object of interest, a second plurality of objects that are complementary to the object of interest and are represented in a second plurality of image segments extracted from a second plurality of images, wherein the second plurality of object segments were not included in the first plurality of image segments; and (Temple, para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”). Note that: (1) the pickup truck, sofa, and airplane in FIG. 1E above are mapped into a second plurality of objects that are related to life style and complementary to the object of interest (bottle), while they are represented in corresponding second image segments (pickup truck, sofa, and airplane) that are extracted from a corresponding second plurality of images shown in FIG. 1E; and (2) the second plurality of object segments (i.e., pickup truck, sofa, and airplane) were not included in the first plurality of image segments (i.e., glass, bottle, car, table, chairs, shoe, TV, lamp, and lights).

causing the second collage to be presented on the client device. (Temple, FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the second collage. 
determining, based at least in part on the object of interest and the second plurality of objects, a second collage layout template from the plurality of collage layout templates, wherein the second collage layout specifies a second arrangement of the first image segment and the second plurality of image segments to form a second collage; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”).

Note that: (1) a second layout template can be determined or selected from a library of layout templates (the plurality of collage layout templates); and (2) the user can select a layout template that is based in part on a given quantity of photos or a total number of the object of interest (bottle) and the second plurality of image segments (i.e., pickup truck, sofa, and airplane) with an arrangement of empty cells as a second arrangement for each photo (object corresponding image segment) to be included to form a collage as a second collage. 
generating, based at least in part on the second collage layout, the second collage that includes the first image segment and the second plurality of image segments in the second arrangement specified by the second collage layout; (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the second collage is generated or constructed with the second collage layout and the first image segment (bottle) and the second plurality of image segments in the second arrangement specified by the second collage layout; (2) a similar collage generation method to that for claim 1 above can be adopted or repeated; and (3) the image segments can be arranged by the layout template.

The motivation to combine Temple and Diakopoulos given in claim 1 is incorporated here.

Regarding claim 4, Temple in view of Diakopoulos discloses The computer-implemented method of claim 1, further comprising:

receiving, in response to presenting of the collage on the client device, a selection of at least one second image segment from the first plurality of image segments and the first image segment; (Temple, para. [0018], “A user may select an image segment and the pixels of the image corresponding to the selected image segment are extracted to generate an extracted image segment”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: after the collage has been generated and presented on the user’s device (the client device), the user can select at least one second image segment (lamp) from the first plurality of image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) and the first image segment (bottle).

receiving a secondary query input; determining, based at least in part on the at least one second image segment and the secondary query input, a plurality of responsive content items; and (Temple, para. [0078], “other training techniques or processes may be used to train a DNN to receive an input image and determine one or more image segments corresponding to objects represented in the input image”; FIG. 1F: a plurality of responsive content items are shown in the bottom part related to the lamp). Note that: (1) “an input image” to a trained DNN can be regarded as a secondary query input and the input image can be an image related to or including the at least one second image segment (lamp); and (2) the DNN can determine one or more image segments as a plurality of responsive content items based on the input image and the second image segment (lamp).

causing at least a portion of the responsive content items to be presented on the client device. (Temple, FIG. 1F: a plurality of responsive content items are shown in the bottom part related to the lamp on a smart phone). Note that: it is obvious to one having ordinary skill in the art that a portion or all of the responsive content items can be displayed or presented on the client device (a smart phone).

Regarding claim 6, Temple in view of Diakopoulos discloses A computing system, comprising: one or more processors; and a memory storing program instructions that, when executed by the one or more processors, cause the one or more processors to at least: (Temple, FIG. 10: the user device as a computing system comprises “PROCESSOR(s)”, “MEMORY”, and “APPLICATION”). 
Note that: conventionally the program instructions of the application are stored in the memory to be executed by the processor(s).

receive a first plurality of image segments; (Temple, para. [0023], “In some implementations, additional images 124, image segments, and/or extracted image segments, such as images/extracted image segments that are visually similar to the image segment 112-2 may also be presented on the user interface of the device 100 in response to a user selection of an image 112. For example, in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to the user as additional images 124”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: the image segments (glass, bottle, and table) representing the objects in FIG. 1A, as a first plurality of image segments, can be extracted or determined from the first plurality of the images using the processing in para. [0023] of Temple above.

cause the collage to be presented on a client device. (Temple, FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the collage to be presented.

determine, based at least in part on the first plurality of image segments, a collage layout that specifies an arrangement of the first plurality of image segments; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the layout template can be determined or selected from a library of layout templates (a plurality of collage layout templates); and (2) the user can select a layout template that is based on a given quantity of photos or a number of objects / image segments (first plurality of objects corresponding to the first plurality of image segments) with an arrangement of empty cells for each photo (object corresponding image segment) to be included.

generate, based at least in part on the collage layout, a collage that includes the first plurality of image segments in the arrangement specified by the collage layout; and (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the collage is generated or constructed with the collage layout and the image segments (the first plurality of image segments); and (2) the image segments can be arranged by the layout template.

The motivation to combine Temple and Diakopoulos given in claim 1 is incorporated here. 
Regarding claim 7, Temple in view of Diakopoulos discloses The computing system of claim 6, wherein the first plurality of image segments are extracted from a scene presented in a first image and include representations of a plurality of objects represented in the scene. (Temple, para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”; FIG. 1A: a first image “112” with a bottle, a glass, and a table; FIG. 1B: “112-2” of the bottle as a first image segment that is extracted from image “112”). Note that: (1) image 112 can be regarded as a first image; and (2) a plurality of image segments as representations of a plurality of objects (a bottle, a glass, and a table) are extracted or determined from the first image by processing the first image.

Regarding claim 8, Temple in view of Diakopoulos discloses The computing system of claim 6, wherein:

the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least cause the collage to be stored as a content item; (Temple, para. [0085], “a collage management component 1014 that maintains, for example, collages created and/or viewed by the user of the user device, extracted image segments, etc., and/or performs some or all of the implementations discussed herein”; para. [0092], “The data store 1103 can include several separate data table, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store 1103 may include digital items (e.g., images) and corresponding metadata (e.g., image segments, popularity, source) about those items. Collage data and/or user information and/or other information may likewise be stored in the data store”). Note that: the data store as an online service stores and maintains collage data as digital content items created and/or viewed by the user of the user device.

the collage includes a first respective link to each of the first plurality of image segments; (Temple, FIG. 7A: the collage 740 includes five extracted image segments 743-1, 743-2, 743-4, 743-5, 743-6, and a typed text input 743-3 of "MY CHRISTMAS LIST"; para. [0072], “The extracted image segments of the collage 740 may be processed by the example process 600 and a determination made that extracted image segments 743-1 (bicycle), 743-2 (cowboy hat), and 743-5 (book) correspond to buyable objects. As such, a buyable indication 745-1, 745-2, and 745-3 are presented next to the respective extracted image segment. In this example, the object of a sweater that is represented by the extracted image segment 743-4 may have been previously indicated as buyable and now indicated as purchased, through presentation of the purchased indicator 747.”). Note that: (1) 743-2 (hat), 743-4 (sweater), and 743-5 (book), replacing the first plurality of image segments (glass, bottle, and table), can be regarded as the first plurality of image segments in one example for explanation purposes only here; and (2) the buyable or purchased indicator (745-2, 747, or 745-3) for each of 743-2 (hat) / 743-4 (sweater) / 743-5 (book) (the first plurality of image segments) can be regarded as a first respective link.

each of the first plurality of image segments includes a second respective link to a corresponding image from which it was extracted. (Temple, para. [0045], “the metadata may include, but is not limited to, an indication of the image from which the image segment was extracted”). 
Note that: the metadata of each image segment of the first plurality of image segments (glass, bottle, and table) can be mapped into a second respective link to the corresponding image from which the image segment is extracted. Regarding claim 10, Temple in view of Diakopoulos discloses The computing system of claim 6, wherein: the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least determine user information associated with a user associated with the client device; and (Temple, para. [0040], “the image can be from any source such as a camera or other imaging element, from a website, from photos stored in a memory of a user device or stored in memory that is accessible by the user device … in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to a user for selection and inclusion in the collage”). Note that: the popularity or frequency of extracted image segments used on other collages by the same or other users can be regarded or determined as user information associated with a user and the user device (client device). the first plurality of image segments are determined based at least in part on the user information. Note that: since the user information includes the popularity or frequency of extracted image segments used on other collages by the same or other users, the first plurality of image segments (glass, bottle, and table) can be selected or determined from the extracted image segments that are used by the same or other users. 
Regarding claim 11, Temple in view of Diakopoulos discloses The computing system of claim 6, wherein the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: receive, in response to presenting of the collage on the client device, a selection of at least one second image segment from the first plurality of image segments; (Temple, para. [0018], “A user may select an image segment and the pixels of the image corresponding to the selected image segment are extracted to generate an extracted image segment”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: after the collage has been generated and presented on the user’s device (the client device), the user can select at least one second image segment (bottle) from the first plurality of image segments (glass, bottle, and table). determine a second plurality of image segments that include representation of a plurality of second objects that are complementary to objects represented in the at least one second image segment; (Temple, para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”). 
Note that: (1) TV, pickup truck, lamp, and lights are mapped into a second plurality of objects while the corresponding image segments are mapped into a second plurality of image segments; and (2) the second plurality of objects (TV, pickup truck, lamp, and lights) are related to life style and complementary to objects represented in the at least one second image segment (bottle). cause the second collage to be presented on the client device. (Temple, FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the second collage. determine, based at least in part on the at least one second image segment and the second plurality of image segments, a second collage layout that specifies a second arrangement of the at least one second image segment and the second plurality of image segments; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). 
Note that: (1) a second layout template can be determined or selected from a library of layout templates (the plurality of collage layout templates); and (2) the user can select a layout template that is based in part on a given quantity of photos or a total number of at least one second image segment (bottle) and the second plurality of image segments (TV, pickup truck, lamp, and lights) with an arrangement of empty cells as a second arrangement for each photo (object corresponding image segment) to be included. generate, based at least in part on the second collage layout, a second collage that includes the at least one second image segment and the second plurality of image segments in the second arrangement specified by the second collage layout; and (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the second collage is generated or constructed with the collage layout and at least one second image segment (bottle) and the second plurality of image segments (TV, pickup truck, lamp, and lights) in the second arrangement specified by the second collage layout; and (2) the image segments can be arranged by the layout template. The motivation to combine Temple and Diakopoulos given in claim 6 is incorporated here. Regarding claim 12, Temple in view of Diakopoulos discloses The computing system of claim 6, wherein the first plurality of image segments includes: a first image segmented extracted from a first image and includes a representation of an object of interest; and (Temple, para. 
[0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”; FIG. 1A: a first image “112” with a bottle as an object of interest; FIG. 1B: “112-2” of the bottle as a first image segment that is extracted from image “112”). Note that: (1) the object (bottle) can be regarded as an object of interest while a corresponding first image segment (bottle) is distinguished; and (2) the first image segment (bottle) is determined or extracted by processing a first image (a first image “112” with a bottle). a second plurality of image segments that are extracted from a first plurality of images and include representations of a first plurality of objects that are complementary to the object of interest. (Temple, para. [0023], “In some implementations, additional images 124, image segments, and/or extracted image segments, such as images/extracted image segments that are visually similar to the image segment 112-2 may also be presented on the user interface of the device 100 in response to a user selection of an image 112. For example, in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to the user as additional images 124”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: (1) the images (glass, bottle, car, table, chairs, and shoe) in FIG. 1A, the additional images 124 in FIG. 
1B, the images with TV and lamp in FIG. 1E, the additional images (lamps and lights) in FIG. 1F, can be mapped into a first plurality of images; (2) the objects in the first plurality of images (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) can be mapped into a first plurality of objects that are complementary to the object of interest (bottle presented by the image segment 112-2) corresponding to the user’s same life style; and (3) the image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) representing the objects, as a second plurality of image segments, can be extracted or determined from the first plurality of the images (the objects in FIG. 1A and “124” in FIG. 1B) using the processing in para. [0023] of Temple above. Regarding claim 13, Temple in view of Diakopoulos discloses The computing system of claim 12, wherein the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: receive, in response to presenting of the collage on the client device, a selection of the first image segment; (Temple, para. [0018], “A user may select an image segment and the pixels of the image corresponding to the selected image segment are extracted to generate an extracted image segment”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights)). Note that: after the collage has been generated and presented on the user’s device (the client device), the user can select the first image segment (bottle). 
determine, based at least in part on the object of interest, a third plurality of image segments that include representations of objects that are complementary to the object of interest, wherein the third plurality of object segments were not included in the second plurality of image segments; and (Temple, para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”). Note that: (1) pickup truck, sofa, and airplane in FIG. 1E above are mapped into a third plurality of objects that are related to life style and complementary to the object of interest (bottle); and (2) the third plurality of image segments (i.e., pickup truck, sofa, and airplane) were not included in the second plurality of image segments (i.e., glass, bottle, car, table, chairs, shoe, TV, lamp, and lights). cause the second collage to be presented on the client device. (Temple, FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the second collage. determine, based at least in part on the object of interest and the third plurality of image segments, a second collage layout that specifies a second arrangement of the first image segment and the third plurality of image segments; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. 
Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) a second layout template can be determined or selected from a library of layout templates (the plurality of collage layout templates); and (2) the user can select a layout template that is based in part on a given quantity of photos or a total number of the object of interest (bottle) and the third plurality of image segments (i.e., pickup truck, sofa, and airplane) with an arrangement of empty cells as a second arrangement for each photo (object corresponding image segment) to be included to form a collage as a second collage. generate, based at least in part on the second collage layout, a second collage that includes the first image segment and the third plurality of image segments in the second arrangement specified by the second collage layout; and (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the second collage is generated or constructed with the collage layout and at least the first image segment (bottle) and the third plurality of image segments in the second arrangement specified by the second collage layout; and (2) the image segments can be arranged by the second collage layout. The motivation to combine Temple and Diakopoulos given in claim 6 is incorporated here. 
Claim 15 corresponds to the method of claim 4. Therefore, claim 15 is rejected under the same rationale as claim 4. Regarding claim 16, Temple in view of Diakopoulos discloses A method, comprising: (Temple, Abstract, “Described are systems and methods”). receiving, from a client device associated with a user, an indication of a first image segment that is extracted from a first image and includes a representation of an object of interest; (Temple, para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”; FIG. 1A: a first image “112” with a bottle as an object of interest as shown on the user’s smart phone; FIG. 1B: “112-2” of the bottle as a first image segment that is extracted from image “112”). Note that: (1) the object (bottle) can be regarded as an object of interest while a corresponding first image segment (bottle) is distinguished; and (2) the first image segment (bottle) is determined or extracted by processing a first image (a first image “112” with a bottle). determining, based at least in part on the object of interest and user information associated with the user, a first plurality of image segments that include representations of objects that are complementary to the object of interest; (Temple, para. [0023], “In some implementations, additional images 124, image segments, and/or extracted image segments, such as images/extracted image segments that are visually similar to the image segment 112-2 may also be presented on the user interface of the device 100 in response to a user selection of an image 112. 
For example, in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to the user as additional images 124”; para. [0095], “one or more of processing a first image to determine a first image segment that corresponds to less than all of the first image and corresponds to an object represented in the first image, presenting, on a user device, the first image and the first image segment such that the first image segment is visually distinguished from the first image”; FIG. 1A: images and image segments (glass, bottle, car, table, chairs, and shoe); FIG. 1B: for bottle “112-2”, there are additional images “124”; FIG. 1E: images and image segments (TV, pickup truck, lamp, sofa, airplane); FIG. 1F: images and segments (lamp and lights); para. [0040], “the image can be from any source such as a camera or other imaging element, from a website, from photos stored in a memory of a user device or stored in memory that is accessible by the user device … in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to a user for selection and inclusion in the collage”). Note that: (1) the images with glass, bottle, car, table, chairs, and shoe in FIG. 1A, the additional images 124 in FIG. 1B, the images with TV and lamp in FIG. 1E, the additional images (lamps and lights) in FIG. 
1F, can be mapped into a first plurality of images; (2) a first plurality of image segments in the first plurality of images (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) can be extracted or determined by processing the first plurality of images to segment the corresponding objects that are complementary to the object of interest (bottle presented by the image segment 112-2) for the user’s same life style; (3) the image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights) representing the objects, as a first plurality of image segments, can be extracted or determined from the first plurality of the images (the objects in FIG. 1A and “124” in FIG. 1B) using the processing in para. [0023] of Temple above based on the object of interest as complementary items; and (4) the popularity or frequency of extracted image segments used on other collages by the same or other users can be regarded or determined as user information associated with a user and the user device (client device). Therefore, user information associated with the user can be used to determine the extracted image segments as well as the object of interest (bottle). causing the collage to be presented on the client device. (Temple, FIG. 7A: a client device (e.g., smart phone) showing a collage). Note that: “a collage” can be regarded as the collage to be presented. determining, based at least in part on the first image segment and the first plurality of image segments, a collage layout that specifies an arrangement of the first image segment and the first plurality of image segments; (Diakopoulos, page 183, Abstract, “Our layout method utilizes a pre-designed template which consists of cells for photos and annotations applied to these cells. The layout is then filled by matching the metadata of photos to the annotations in the cells using an optimization algorithm”; page 184, col. right, para. 
2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the layout template can be determined or selected from a library of layout templates (a plurality of collage layout templates); and (2) the user can select a layout template that is based on a given quantity of photos or a total number of objects (the object of interest and the first plurality of objects) with an arrangement of empty cells for each photo (object corresponding image segment) to be included. generate, based at least in part on the collage layout, a collage that includes the first image segment and the first plurality of image segments in the arrangement specified by the collage layout; and (Diakopoulos, page 183, Abstract, “we present an approach that supports efficient construction of a collage by assisting the user with an automatic layout procedure that can be controlled at a high level”; page 186, col. right, Figure 3: “A collage showing a trip through Ireland”; page 184, col. right, Figure 1: “A sample layout template indicating photo cells and associated annotations”). Note that: (1) the collage is generated or constructed with the collage layout and the image segments (the first image segment and the first plurality of image segments); and (2) the image segments can be arranged by the layout template. The motivation to combine Temple and Diakopoulos given in claim 1 is incorporated here. 
Regarding claim 18, Temple in view of Diakopoulos discloses The method of claim 16, wherein the first plurality of image segments are further determined at least based in part on a popularity of the first image segments. (Temple, para. [0040], “in some implementations, the popularity or frequency of extracted image segments used on other collages by the same or other users may be monitored and popular or trending extracted image segments presented to a user for selection and inclusion in the collage”). Note that: a user can select or determine extracted image segments based on the popularity or frequency of extracted image segments to be included in the collage. Regarding claim 20, Temple in view of Diakopoulos discloses The method of claim 16, wherein the arrangement specifies at least one of: a relative positioning of each of the first image segment and the first plurality image segments in three dimensions; a background color; or a design element. (Diakopoulos, page 184, col. right, para. 2, “This template consists of an arrangement of empty cells for each photo to be included. Layout diversity is provided to the user through a library of layout templates offering varying arrangements of cell sizes, shapes, and quantities. Either the user can select an existing template, a suitably sized one can be generated automatically for a given quantity of photos, or the user may design one using a simple WYSIWYG interface. The interface allows the user to directly specify the size, position, and z-order of layout cells”). Note that: the user can use a simple WYSIWYG interface to design the arrangement by directly specifying the position of layout cells (relative positioning of each of the first image segment and the first plurality image segments in three dimensions). The motivation to combine Temple and Diakopoulos given in claim 16 is incorporated here. Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Temple in view of Diakopoulos and Isaacson et al. 
(US 9449338 B2, hereinafter “Isaacson”). Regarding claim 5, Temple in view of Diakopoulos discloses The computer-implemented method of claim 4. However, Temple in view of Diakopoulos fails to disclose, but in the same art of e-commerce technology, Isaacson discloses, wherein the secondary query input includes a text-based query. (Isaacson, col. 61, lines 8-11, “receiving a text-based query in the input field; correlating the text-based query against a product database of products for sale from merchants to yield a correlation; determining, via a processor and based at least in part on the correlation, that the text-based query is associated with one of a search intent and a purchase intent to yield a determination”). Note that: the secondary query input can include a text-based query in the input field. Temple in view of Diakopoulos, and Isaacson, are in the same field of endeavor, namely e-commerce technology. Before the effective filing date of the claimed invention, it would have been obvious to apply a text-based query, as taught by Isaacson, to Temple in view of Diakopoulos. The motivation would have been “receiving a text-based query in the input field; correlating the text-based query against a product database of products for sale from merchants to yield a correlation; determining, via a processor and based at least in part on the correlation, that the text-based query is associated with one of a search intent and a purchase intent to yield a determination” (Isaacson, col. 61, lines 8-11). The suggestion for doing so would be to enable a text-based query and efficiently generate a collage. Therefore, it would have been obvious to combine Temple, Diakopoulos, and Isaacson. Claims 9 and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Temple in view of Diakopoulos and Tran et al. (US 20160328868 A1, hereinafter “Tran”). 
Regarding claim 9, Temple in view of Diakopoulos discloses The computing system of claim 6, wherein: the program instructions that, when executed by the one or more processors, further cause the one or more processors to at least: prior to generation of the collage: determine a first object type for each of the first plurality of objects represented in the first plurality of image segments; and (Temple, para. [0069], “For example, any of a plurality of image processing algorithms or DNNs may be utilized to process an image and detect an object, or an object type represented in the image”). Note that: DNNs may be used to determine object type for each of the first plurality of objects represented in the first plurality of image segments prior to generation of the collage. However, Temple in view of Diakopoulos fails to disclose, but in the same art of computer graphics, Tran discloses determine a collage category of the collage to be generated; (Tran, page 7, para. [0078], “In this example, it can be determined that January 1 is the birthday of a user associated with media content items in the collage 404, and therefore the theme can correspond to birthdays”). Note that: a collage theme (birthdays) is regarded as a collage category and is determined based on a user’s birthday. the collage layout is determined based at least in part on at least one of the collage category, or the first object types. (Tran, page 8, para. [0086], “a particular virtual overlaying template 406 can be selected based on acquired contextual information associated with the media content items in the collage 404”; page 7, para. [0078], “The particular virtual overlaying template 406 can thus be selected based on its relevancy to birthdays. Additionally, in this example, the particular template 406 can also include a visible element associated with the theme, such as a “Happy Birthday!” sticker 408”). 
Note that: a particular virtual overlaying template 406 can be regarded as a collage layout and can be determined or selected based on acquired contextual information or relevancy to birthdays (collage category or theme). Temple in view of Diakopoulos, and Tran, are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply determining a collage category and a collage layout, as taught by Tran, to Temple in view of Diakopoulos. The motivation would have been “a particular virtual overlaying template 406 can be selected based on acquired contextual information associated with the media content items in the collage 404” (Tran, page 8, para. [0086]). The suggestion for doing so would be to determine a collage category and further select a content-relevant collage layout for efficient generation of a collage. Therefore, it would have been obvious to combine Temple, Diakopoulos, and Tran. Regarding claim 17, the combination of Temple, Diakopoulos, and Tran discloses The method of claim 16, further comprising: prior to generation of the collage: determining a first object type of the object of interest; and determining a second object type for each of the first plurality of objects represented in the first plurality of image segments; (Temple, para. [0069], “For example, any of a plurality of image processing algorithms or DNNs may be utilized to process an image and detect an object, or an object type represented in the image”). Note that: (1) DNNs can be used to determine the object type (a second object type) for each of the first plurality of objects represented in the first plurality of image segments prior to generation of the collage; and (2) the same DNN can be used to determine the object type (a first object type) for the object of interest. 
determining a collage category of the collage to be generated; (Tran, page 7, para. [0078], “In this example, it can be determined that January 1 is the birthday of a user associated with media content items in the collage 404, and therefore the theme can correspond to birthdays”). Note that: a collage theme (birthdays) is regarded as a collage category and is determined based on a user’s birthday. wherein the collage layout is determined based at least in part on at least one of the collage category, the first object type, or the second object types. (Tran, page 8, para. [0086], “a particular virtual overlaying template 406 can be selected based on acquired contextual information associated with the media content items in the collage 404”; page 7, para. [0078], “The particular virtual overlaying template 406 can thus be selected based on its relevancy to birthdays. Additionally, in this example, the particular template 406 can also include a visible element associated with the theme, such as a “Happy Birthday!” sticker 408”). Note that: a particular virtual overlaying template 406 can be regarded as a collage layout and can be determined or selected based on acquired contextual information or relevancy to birthdays (collage category or theme). The motivation to combine Temple, Diakopoulos, and Tran given in claim 9 is incorporated here. Claim 14 is rejected under 35 U.S.C. 103 as being unpatentable over Temple in view of Diakopoulos and Rudnick et al. (US 20240428304 A1, hereinafter “Rudnick”). Regarding claim 14, Temple in view of Diakopoulos discloses The computing system of claim 12. However, Temple in view of Diakopoulos fails to disclose, but in the same art of e-commerce technology, Rudnick discloses, wherein the first plurality of images are included in a catalog associated with a brand. (Rudnick, page 4, para. 
[0034], “Catalog data may include a name associated with a catalog (e.g., a retailer or a brand), an icon associated with the catalog (e.g., a logo), campaigns associated with the catalog, content items including advertisements, coupons, promotions, recipes, images (e.g., photographs), videos, associated with the catalog, or any other suitable types of information associated with a catalog”). Note that: a catalog can include the images (the first plurality of images) associated with a brand. Temple in view of Diakopoulos, and Rudnick, are in the same field of endeavor, namely e-commerce technology. Before the effective filing date of the claimed invention, it would have been obvious to apply images included in a catalog associated with a brand, as taught by Rudnick, to Temple in view of Diakopoulos. The motivation would have been “Catalog data may include a name associated with a catalog (e.g., a retailer or a brand), an icon associated with the catalog (e.g., a logo), campaigns associated with the catalog, content items including advertisements, coupons, promotions, recipes, images (e.g., photographs), videos, associated with the catalog, or any other suitable types of information associated with a catalog” (Rudnick, page 4, para. [0034]). The suggestion for doing so would be to obtain images included in a catalog associated with a brand. Therefore, it would have been obvious to combine Temple, Diakopoulos, and Rudnick. Claim 19 is rejected under 35 U.S.C. 103 as being unpatentable over Temple in view of Diakopoulos, Rudnick, and Hanigan (US 20160098685 A1, hereinafter “Hanigan”). Regarding claim 19, the combination of Temple, Diakopoulos, and Rudnick discloses The method of claim 16, wherein: each image segment of the first plurality of image segments included in the collage includes a first respective link to a respective object page corresponding to an object represented in each image segment. (Temple, FIG. 
7A: the collage 740 includes five extracted image segments 743-1, 743-2, 743-4, 743-5, 743-6, and a typed text input 743-3 of "MY CHRISTMAS LIST"; para. [0072], “The extracted image segments of the collage 740 may be processed by the example process 600 and a determination made that extracted image segments 743-1 (bicycle), 743-2 (cowboy hat), and 743-5 (book) correspond to buyable objects. As such, a buyable indication 745-1, 745-2, and 745-3 are presented next to the respective extracted image segment. In this example, the object of a sweater that is represented by the extracted image segment 743-4 may have been previously indicated as buyable and now indicated as purchased, through presentation of the purchased indicator 747”; para. [0076], “In particular, in response to a user selecting the extracted image segment 743-1 of the bicycle, the buyable object detail page 755 is presented that includes additional information about the object represented by the selected extracted image segment, in this example the extracted image segment 743-1”). Note that: (1) among the extracted image segments in FIG. 7B, 743-1 (bicycle) can be taken as an example for each of the first plurality of image segments included in the collage, for explanation purposes only here, in which the object page (bicycle sales information) for an object (bicycle) represented in image segment 743-1 (bicycle) is displayed when a user selects the link to the object (bicycle); (2) 743-2 (hat), 743-4 (sweater), and 743-5 (book), replacing the first plurality of image segments (glass, bottle, car, table, chairs, shoe, TV, lamp, and lights), can be regarded as the first plurality of image segments in one example for explanation purposes only here; and (3) the buyable or purchased indicator (745-1, 745-2, 747, or 745-3) for each of 743-1 (bicycle) (the first image segment) and 743-2 (hat) / 743-4 (sweater) / 743-5 (book) (the first plurality of image segments) can be regarded as a first respective link. 
storing the collage (Temple, para. [0085], “a collage management component 1014 that maintains, for example, collages created and/or viewed by the user of the user device, extracted image segments, etc., and/or performs some or all of the implementations discussed herein”; para. [0092], “The data store 1103 can include several separate data tables, databases or other data storage mechanisms and media for storing data relating to a particular aspect. For example, the data store 1103 may include digital items (e.g., images) and corresponding metadata (e.g., image segments, popularity, source) about those items. Collage data and/or user information and/or other information may likewise be stored in the data store”). Note that: the data store as an online service stores and maintains collage data as digital content items created and/or viewed by the user of the user device.

the first plurality of images are extracted from a first plurality of images that are included in a catalog associated with a brand; (Rudnick, page 4, para. [0034], “Catalog data may include a name associated with a catalog (e.g., a retailer or a brand), an icon associated with the catalog (e.g., a logo), campaigns associated with the catalog, content items including advertisements, coupons, promotions, recipes, images (e.g., photographs), videos, associated with the catalog, or any other suitable types of information associated with a catalog”). Note that: (1) a catalog can include the images (the first plurality of images) associated with a brand; and (2) the first plurality of images can be taken or extracted from a first plurality of images from the catalog.

However, the combination of Temple, Diakopoulos, and Rudnick fails to disclose, but in the same art of computer graphics, Hanigan discloses as an advertisement, (Hanigan, page 6, para. [0089], “advertising collage intended to depict individual stories about a product, service, or experience”). 
Note that: the collage can be an advertising collage as an advertisement.

Temple, Diakopoulos, Rudnick, and Hanigan are in the same field of endeavor, namely computer graphics. Before the effective filing date of the claimed invention, it would have been obvious to apply a collage as an advertisement, as taught by Hanigan, to the combination of Temple, Diakopoulos, and Rudnick. The motivation would have been “advertising collage intended to depict individual stories about a product, service, or experience” (Hanigan, page 6, para. [0089]). The suggestion for doing so would allow a collage to be regarded as an advertisement for asset management and manipulation. Therefore, it would have been obvious to combine Temple, Diakopoulos, Rudnick, and Hanigan.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to BIAO CHEN, whose telephone number is (703)756-1199. The examiner can normally be reached M-F, 8am-5pm ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee M Tung, can be reached at (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/Biao Chen/
Patent Examiner, Art Unit 2611

/KEE M TUNG/
Supervisory Patent Examiner, Art Unit 2611

Prosecution Timeline

Jul 03, 2024
Application Filed
Feb 08, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602873
AUTOMATIC RETOPOLOGIZATION OF TEXTURED 3D MESHES
2y 5m to grant Granted Apr 14, 2026
Patent 12597149
APPARATUS, METHOD, AND COMPUTER PROGRAM FOR NETWORK COMMUNICATIONS
2y 5m to grant Granted Apr 07, 2026
Patent 12562138
METHOD AND SYSTEM FOR COMPENSATING ANTI-DIZZINESS PREDICTED IN ADVANCE
2y 5m to grant Granted Feb 24, 2026
Patent 12561897
COMPRESSED REPRESENTATIONS FOR APPEARANCE OF FIBER-BASED DIGITAL ASSETS
2y 5m to grant Granted Feb 24, 2026
Patent 12548129
APPARATUSES, METHODS AND COMPUTER PROGRAMMES FOR USE IN MODELLING IMAGES CAPTURED BY ANAMORPHIC LENSES
2y 5m to grant Granted Feb 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+26.3%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 32 resolved cases by this examiner. Grant probability derived from career allow rate.
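
The headline projections above are simple functions of the examiner's career counts. A minimal sketch of that arithmetic, assuming the with-interview figure is the career allow rate plus the stated interview lift, capped at 99% (the cap and all variable names are assumptions, not the tool's actual formula):

```python
# Hypothetical reconstruction of the dashboard arithmetic; the capping
# rule and names below are assumptions, not the tool's actual formula.
granted = 27    # cases this examiner allowed
resolved = 32   # total resolved cases

allow_rate = granted / resolved      # career allow rate: 0.84375
interview_lift = 0.263               # stated lift for interviewed cases

# Displayed grant probability, rounded to a whole percent.
grant_probability = round(allow_rate * 100)

# With-interview estimate: add the lift, cap at 99% (assumed cap).
with_interview = min(allow_rate + interview_lift, 0.99)

print(f"{grant_probability}% baseline, {with_interview:.0%} with interview")
# -> 84% baseline, 99% with interview
```

Under these assumptions, 27/32 rounds to the displayed 84%, and adding the 26.3-point interview lift saturates the assumed 99% cap, matching the "With Interview" figure.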
