Prosecution Insights
Last updated: April 19, 2026
Application No. 18/736,874

ELECTRONIC DEVICE, METHOD, AND NON-TRANSITORY COMPUTER-READABLE MEDIUM FOR IMAGE EDITING

Non-Final OA: §102, §103
Filed: Jun 07, 2024
Examiner: MA, MICHELLE HAU
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 81% (Favorable)
OA Rounds: 1-2
To Grant: 2y 7m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 81% (17 granted / 21 resolved; +19.0% vs TC avg, above average)
Interview Lift: +36.4% among resolved cases with interview (strong)
Typical Timeline: 2y 7m avg prosecution; 35 applications currently pending
Career History: 56 total applications across all art units
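
These cards can be recomputed from the counts shown. A minimal Python sketch, assuming (the page does not state its formula) that the interview lift is the relative difference in allow rate between interviewed and non-interviewed resolved cases:

```python
# Sanity check for the Examiner Intelligence cards.
# Assumption (not stated on the page): interview lift is relative, i.e.
# with_interview = without_interview * (1 + lift).

granted, resolved = 17, 21
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")        # -> 81%

lift = 0.364                                         # +36.4% interview lift
with_interview = 0.99                                # allow rate with interview
implied_without = with_interview / (1 + lift)
print(f"Implied rate without interview: {implied_without:.1%}")  # -> ~72.6%
```

Under that reading, the 81%, 99%, and +36.4% figures are mutually consistent; if the product defines lift differently, only the implied without-interview rate changes.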

Statute-Specific Performance

§101: 3.0% (-37.0% vs TC avg)
§103: 84.2% (+44.2% vs TC avg)
§102: 6.4% (-33.6% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center average is an estimate. Based on career data from 21 resolved cases.
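
The per-statute deltas read as simple differences against the Tech Center estimate; all four rows are consistent with the same 40.0% baseline. A small sketch of that arithmetic (the subtraction convention is an assumption, though every row satisfies it):

```python
# "vs TC avg" check: delta = examiner rate - implied TC average.
rows = {
    "§101": (3.0, -37.0),
    "§103": (84.2, +44.2),
    "§102": (6.4, -33.6),
    "§112": (5.5, -34.5),
}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: {rate}% examiner, implied TC avg {rate - delta:.1f}%")
# Every row implies a 40.0% TC average, suggesting a single baseline estimate.
```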

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Specification

The disclosure is objected to because of the following informalities:
In paragraph 0089 line 9, “Furter” should read “Further”.
In paragraph 0101 line 4, “mage” should read “image”.
In paragraph 0107 line 8, “one continuously input” should read “one continuous input”.
In paragraph 0114 line 5, “UE” should read “UI”.
In paragraph 0177 lines 10-11, “moves the object in the virtual object” should read “moves the object in the virtual space”.
In paragraph 0179 line 1, “a wearer views is similar” should read “a wearer view is similar”.
In paragraph 0179 line 3, “the wears views” should read “the wearer view”.
Appropriate correction is required.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-3, 5, and 11 are rejected under 35 U.S.C. 102(a)(1) and 35 U.S.C. 102(a)(2) as being anticipated by Zhang et al. (US 20230325996 A1), hereinafter Zhang.

Regarding claim 1, Zhang teaches a method of editing an image (Paragraph 0141 – “the object recommendation system 106 utilizes the graphical user interface to implement a workflow for providing foreground object image recommendations and composite images”), the method comprising: receiving a first user input (user interaction with the selectable option → see quote) for a first image (background image → see quote) (Paragraph 0145 – “the object recommendation system 106 also provides a selectable option 714 for providing an indication to search for a foreground object image for use in generating the composite image with the background image 710. Indeed, in one or more embodiments, in response to a user interaction with the selectable option 714, the object recommendation system 106 receives an indication to search for one or more foreground object images that are compatible with the background image 710”; Note: the background image is equivalent to the first image, and the user interaction with the selectable option is equivalent to the first user input); determining whether the first user input indicates an instruction to add a first object (foreground object → see quote) (Paragraph 0145 – “the object recommendation system 106 also provides a selectable option 714 for providing an indication to search for a foreground object image for use in generating the composite image with the background image 710. Indeed, in one or more embodiments, in response to a user interaction with the selectable option 714, the object recommendation system 106 receives an indication to search for one or more foreground object images that are compatible with the background image 710”; Note: the foreground object is equivalent to the first object); when the first user input indicates the instruction to add the first object, generating a first preliminary object image (foreground object image → see quote) for the first object (Paragraph 0059, 0145, 0147 – “the object recommendation system 106 on the server(s) 102 utilizes the one or more search engines to generate a recommendation for utilizing a foreground object image with the background image in generating a composite image…in response to a user interaction with the selectable option 714, the object recommendation system 106 receives an indication to search for one or more foreground object images that are compatible with the background image 710. Accordingly, in some embodiments, in response to detecting a user interaction with the selectable option 714, the object recommendation system 106 identifies one or more foreground object images to recommend in response… the object recommendation system 106 provides a recommendation for display within the graphical user interface 702”; Note: the foreground object image is equivalent to the first preliminary object image); displaying a second image (composite image → see quote) including the first object, the second image being generated based on the first preliminary object image and being associated with the first image (Paragraph 0150 – “upon a selection of the foreground object image 722, the object recommendation system 106 generates and provides a composite image 724 that combines the foreground object image 722 with the background image 710”; Note: the composite image is equivalent to the second image including the first object (foreground object)); and when a second user input (user interaction indicating a position or scaling → see quote) indicating an instruction to alter the first object is received for the second image, displaying a third image (composite image with changed positioning or scaling → see quote and note) in which at least one of a size or a location of the first object is changed according to the second user input (Paragraph 0042, 0230 – “the object recommendation system generates the composite image by positioning and or scaling the foreground object image in accordance with additional user selections received via the graphical user interface. Indeed, in some cases, the object recommendation system receives a user interaction indicating a positioning and or a scaling for the foreground object image within the composite image… the object recommendation system 106 receives a selection of the selectable option 1804a via the graphical user interface 1800. In response to the selection, the object recommendation system 106 utilizes the auto-composite model to adjust the size of the foreground object image within the composite image 1802. In particular, the object recommendation system 106 executes a scale prediction model of the auto-composite model to modify the scale of the foreground object image within the composite image 1802 based on a scale of the background image 1806”; Note: the user interaction indicating a position or scaling is equivalent to the second user input. The composite image with changed positioning or scaling is equivalent to the third image).

Regarding claim 2, Zhang teaches the method of claim 1. Zhang further teaches wherein the determining whether the first user input indicates the instruction to add the first object comprises identifying the first object, based on a drawn shape (sketch input → see quote) in the first user input (Fig. 19, Paragraph 0184-0186 – “the object recommendation system 106 provides selectable options 1304a-1304b for executing a search for one or more foreground object images via a composite object search engine. In particular, as shown, the object recommendation system 106 provides the selectable option 1304a for implementing a composite-aware search and the selectable option 1304b for implementing a sketch-based search… a sketch-based search includes a search for one or more foreground object images that match a sketch input… by searching for one or more foreground object images based on a size and object class indicated by a sketch input”; Note: a foreground object, which is the first object, can be added by searching for foreground object images based on a user drawing/shape. 1906 in Fig. 19 shows an example of a drawn shape of a user input, which is identified to be an airplane; see screenshot of Fig. 19 below).

[Screenshot of Fig. 19 (taken from Zhang)]

Regarding claim 3, Zhang teaches the method of claim 1. Zhang further teaches wherein the first preliminary object image (foreground object image → see quote) is generated based on image information (embeddings → see quote) of an object which is included in another image (potential foreground object images → see quote and note) and is an image type (object class → see quote) similar to the first object (Paragraph 0059, 0237-0238 – “the object recommendation system 106 on the server(s) 102 utilizes the one or more search engines to generate a recommendation for utilizing a foreground object image with the background image in generating a composite image…the object recommendation system 106 searches for and retrieves one or more foreground object images utilizing the corresponding search engine(s). In particular, as mentioned, the object recommendation system 106 executes a search via an image search engine using sketch input… the object recommendation system 106 determines an object class of the sketch input 1906 and utilizes the object class in narrowing the search. For instance, in some cases, the object recommendation system 106 determines the object class via a classification neural network. In some implementations, however, the image search engine searches for results corresponding to the sketch input 1906 without explicitly determining the object class (e.g., using embeddings that implicitly encode the object class or object features such as shape, color, etc.)”; Note: the output foreground object image (first preliminary object image) is generated based on the object class (object type) and/or embeddings (image information) of various potential foreground object images).

Regarding claim 5, Zhang teaches the method of claim 1. Zhang further teaches wherein the displaying the third image comprises displaying a lighting effect based on a change in at least one of the size or the location of the first object (Fig. 18A-18D, Paragraph 0182 – “the object recommendation system 106 re-sizes the foreground object image 1210 within the composite image 1214 so the scale of the foreground object image 1210 matches of a scale of the background image 1202. Additionally, as shown, the object recommendation system 106 modifies a lighting of the foreground object image 1210 within the composite image 1214 based on a lighting of the background image 1202. Further, the object recommendation system 106 generates a shadow for the foreground object image 1210 within the composite image 1214. In particular, the object recommendation system 106 generates a shadow in accordance with the lighting conditions of the background image 1202”; Note: generating a shadow is equivalent to the lighting effect, and the change in scale is equivalent to a change in size. Fig. 18A-18D show how the object’s size is increased and when a shadow is added, the shadow corresponds to the increased size of the object).

Regarding claim 11, Zhang teaches the method of claim 1. Zhang further teaches wherein the determining whether the first user input indicates the instruction to add the first object comprises determining the first user input based on a type of an input scheme of the first user input (Paragraph 0184, 0187 – “the object recommendation system 106 provides selectable options 1304a-1304b for executing a search for one or more foreground object images via a composite object search engine. In particular, as shown, the object recommendation system 106 provides the selectable option 1304a for implementing a composite-aware search and the selectable option 1304b for implementing a sketch-based search. In other words, based on a user selection of the one of the selectable options 1304a-1304b, the object recommendation system 106 executes the corresponding search… In some cases, the object recommendation system 106 executes a search based on text input received via the text box 1306 alone or in combination with other search input (e.g., a user selection of the selectable option 1304a for a composite-aware search and/or a selection of a portion of the background image to be used).”; Note: the user input is determined based on the type of input scheme, such as searching by text, sketch, automation, etc.).
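
Before turning to the §103 rejections, the claim-1 flow the examiner reads onto Zhang can be restated as a short sketch. This is a minimal illustration only; the Python types, helpers, and field names below are hypothetical, not taken from Zhang or the application:

```python
# Claim-1 flow as mapped onto Zhang: the first input triggers a foreground-
# object search for a background image; the composite is the "second image";
# a second input repositions/rescales the object to yield the "third image".
from dataclasses import dataclass, field

@dataclass
class Placed:
    object_id: str
    x: float = 0.0
    y: float = 0.0
    scale: float = 1.0

@dataclass
class Composite:
    background: str
    placed: list = field(default_factory=list)

def search_foreground_object(query: str) -> str:
    """Stand-in for Zhang's foreground-object image search."""
    return f"fg:{query}"

def handle_first_input(background: str, user_input: dict):
    # Determine whether the first input indicates an instruction to add an object.
    if user_input.get("action") != "add_object":
        return None
    obj = search_foreground_object(user_input["query"])  # "first preliminary object image"
    return Composite(background, [Placed(obj)])          # the "second image"

def handle_second_input(img: Composite, user_input: dict) -> Composite:
    # Alter size and/or location of the first object -> the "third image".
    p = img.placed[0]
    p.x = user_input.get("x", p.x)
    p.y = user_input.get("y", p.y)
    p.scale = user_input.get("scale", p.scale)
    return img

second = handle_first_input("background.jpg", {"action": "add_object", "query": "airplane"})
third = handle_second_input(second, {"x": 40.0, "scale": 0.5})
```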
Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 4, 6-10, and 12-15 are rejected under 35 U.S.C. 103 as being unpatentable over Zhang in view of Ding et al. (US 20240135561 A1), hereinafter Ding.

Regarding claim 4, Zhang teaches the method of claim 1. Zhang further teaches wherein: the first preliminary object image comprises a first part of the first object and a second part of the first object (Fig. 18A – The figure shows that the first preliminary object image comprises the whole first object, including a first and second part; see modified screenshot of Fig. 18A below). Zhang does not teach displaying the second image comprises displaying the first part of the first object and abstaining from displaying the second part of the first object; and the displaying the third image comprises displaying the first part of the first object and the second part of the first object. However, Ding teaches displaying the second image comprises displaying the first part of the first object and abstaining from displaying the second part of the first object (Fig. 45A-45C, Paragraph 0558 – “as shown in FIG. 45A, the first object 4508a is only partially displayed within the digital image 4506. For instance, in some cases, when the digital image 4506 was captured, the top portion of the first object 4508a was within frame (and captured as part of the digital image 4506) while the bottom portion of the first object 4508a was not within frame”; Note: Fig. 45A shows an image where the first object 4508a has a first part that is displayed (upper body) and a second part (lower body) that is not displayed; see screenshot of Fig. 45A-45C below); and the displaying the third image comprises displaying the first part of the first object and the second part of the first object (Fig. 45A-45C, Paragraph 0559 – “As shown in FIGS. 45B-45C, however, the scene-based image editing system 106 provides the first object 4508a for display within its entirety within the digital image 4506”; Note: Fig. 45B and 45C show an image where the first object 4508a has a first part that is displayed (upper body) and a second part (lower body) that is also displayed; see screenshot of Fig. 45A-45C below). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to display an image with the entire object when the original image only shows part of the object for the benefit of allowing the user to have a better view of the object. For example, in the case where the object is the main subject but is not properly centered or arranged, then moving the object so that all parts of it are displayed would help showcase it better. In Zhang, there is an option for the user to select where the object should be added to the image (Paragraph 0221 – “the object recommendation system 106 receives spot input 1608 via the user interaction with the background image 1606. In one or more embodiments, the object recommendation system 106 utilizes the spot input 1608 as an indication of location for the foreground object image within the resulting composite image”), meaning they can add the object anywhere on the image, even in locations where part of the object might be cut off. Having additional options for the user would help them in cases where they changed their mind about the initial selection and want to move the object to a place for better view.

[Modified screenshot of Fig. 18A (taken from Zhang)]
[Screenshot of Fig. 45A-45C (taken from Ding)]

Regarding claim 6, Zhang in view of Ding teaches the method of claim 4. Zhang further teaches generating a second preliminary object image of a second object (Fig. 20B, Paragraph 0241 – “the object recommendation system 106 generates a second composite image 2008 using the first composite image 2002 and a second foreground object image 2010. In particular, as shown, the object recommendation system 106 generates the second composite image 2008 in accordance with a bounding box input 2012 received within the first composite image 2002”; Note: the second foreground object image is equivalent to the second preliminary object image, and the object in the second foreground object image is equivalent to the second object; see modified screenshot of Fig. 20B below), wherein the second preliminary object image comprises a first part of the second object and a second part of the second object (Fig. 20B – The figure shows the second preliminary object image comprising the entire second object, which in this case, is a person; see modified screenshot of Fig. 20B below), and the displaying the second image further comprises displaying the second object generated based on the second preliminary object image (Fig. 20B, Paragraph 0241 – “the object recommendation system 106 generates a second composite image 2008 using the first composite image 2002 and a second foreground object image 2010. In particular, as shown, the object recommendation system 106 generates the second composite image 2008 in accordance with a bounding box input 2012 received within the first composite image 2002”; Note: the second composite image 2008, which is the second image in this case, comprises the second object displayed based on the second preliminary object image; see modified screenshot of Fig. 20B below). Zhang does not directly teach identifying a second object included in the first image and generating a second preliminary object image for a second object. Instead, Zhang does teach identifying a first object included in the first image (Fig. 19, Paragraph 0184-0186 – “the object recommendation system 106 provides selectable options 1304a-1304b for executing a search for one or more foreground object images via a composite object search engine. In particular, as shown, the object recommendation system 106 provides the selectable option 1304a for implementing a composite-aware search and the selectable option 1304b for implementing a sketch-based search… a sketch-based search includes a search for one or more foreground object images that match a sketch input… by searching for one or more foreground object images based on a size and object class indicated by a sketch input”; Note: the user sketch, which is the first object, in the background image is identified as a foreground object. 1906 in Fig. 19 shows an example of a drawn shape of a user input, which is identified to be an airplane; see screenshot of Fig. 19 above) and generating a first preliminary object image for a first object (Paragraph 0059, 0185-0186 – “the object recommendation system 106 on the server(s) 102 utilizes the one or more search engines to generate a recommendation for utilizing a foreground object image with the background image in generating a composite image…a sketch-based search includes a search for one or more foreground object images that match a sketch input… by searching for one or more foreground object images based on a size and object class indicated by a sketch input”; Note: a foreground object image is equivalent to the first preliminary object image). A person of ordinary skill in the art before the effective filing date of the claimed invention would have recognized that the first object of Zhang could have been substituted for the second object because both the first and second object serve the purpose of being added to an image. Furthermore, a person of ordinary skill in the art would have been able to carry out the substitution. Finally, the substitution achieves the predictable result of creating an image with the object added to it. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to substitute the first object for the second object according to known methods to yield the predictable result of having the object be in an image. In other words, if the process could be done once, on a first object, it would be obvious to do it again on a second object, for cases where the user wants to add additional objects to the image.

[Modified screenshot of Fig. 20B (taken from Zhang)]

Regarding claim 7, Zhang in view of Ding teaches the method of claim 6. Zhang does not teach wherein the displaying the third image comprises: when the location of the first object is changed to be behind the second object, acquiring depth information of the first object that is larger than depth information of the second object and, when the location of the first object is changed to be in front of the second object, acquiring depth information of the first object that is smaller than depth information of the second object; and displaying the first object and the second object based on the depth information of the first object and the depth information of the second object. However, Ding teaches when the location of the first object is changed to be behind the second object, acquiring depth information of the first object that is larger than depth information of the second object (Fig. 48C, Paragraph 0596 – “the scene-based image editing system 106 moves the third object 4808c farther to create an overlap area between the first object 4808a and the third object 4808c. As further shown, the scene-based image editing system 106 occludes the third object 4808c using the first object 4808a within the overlap area. In particular, the scene-based image editing system 106 determines that the object depth of the third object 4808c is greater than the object depth of the first object 4808a and occludes the third object 4808c within the overlap area accordingly”; Note: in this case, object 4808c is equivalent to the first object of claim 7, and object 4808a is equivalent to the second object of claim 7. Fig. 48C shows how object 4808c is behind object 4808a because object 4808c has a greater depth than object 4808a; see screenshot of Fig. 48A-48C below) and, when the location of the first object is changed to be in front of the second object, acquiring depth information of the first object that is smaller than depth information of the second object (Fig. 48B, Paragraph 0594 – “the scene-based image editing system 106 moves the third object 4808c to create an overlap area with the second object 4808b. As further shown, the scene-based image editing system 106 occludes the second object 4808b using the third object 4808c within the overlap area. In particular, the scene-based image editing system 106 determines that the object depth of the second object 4808b is greater than the object depth of the third object 4808c and occludes the second object 4808b within the overlap area accordingly. In some cases, the scene-based image editing system 106 determines which object has the greater object depth and which object has the lesser object depth upon detecting that the second object 4808b and the third object 4808c are overlapping”; Note: in this case, object 4808c is equivalent to the first object of claim 7, and object 4808b is equivalent to the second object of claim 7. Fig. 48B shows how object 4808c is in front of object 4808b because object 4808c has a lower depth than object 4808b; see screenshot of Fig. 48A-48C below); and displaying the first object and the second object based on the depth information of the first object and the depth information of the second object (Fig. 48A-48C, Paragraph 0593 – “FIGS. 48A-48C illustrate another graphical user interface implement by the scene-based image editing system 106 to perform a depth-aware object move operation in accordance with one or more embodiments”; Note: Figures 48A-48C show how objects are displayed based on depth; see screenshot below). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to obtain information that an object’s depth is larger than another object when it is behind the latter object, because logically, when a first object is behind a second object, the first object is further away from the viewer than the second object. Being further away also means that it has greater depth. Additionally, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to display the objects based on depth because the image would appear more realistic and accurate to human sight and perception if it properly depicted the relative depth of each object.

[Screenshot of Fig. 48A-48C (taken from Ding)]

Regarding claim 8, Zhang in view of Ding teaches the method of claim 6. Zhang does not teach wherein the displaying the third image comprises, when the location of the first object is changed to a location overlapping an area in which the second object is located: displaying the first part of the second object and abstaining from displaying the second part of the second object overlapping the first object when depth information of the first object is smaller than depth information of the second object; and displaying the first part of the first object and abstaining from displaying the second part of the first object overlapping the second object when the depth information of the first object is larger than the depth information of the second object. However, Ding teaches when the location of the first object is changed to a location overlapping an area in which the second object is located (Fig. 48B-48C – The figures show examples of the first object (person 4808c) being moved to overlap another object; see screenshot of the figures above): displaying the first part of the second object and abstaining from displaying the second part of the second object overlapping the first object when depth information of the first object is smaller than depth information of the second object (Fig. 48B, Paragraph 0594 – “the scene-based image editing system 106 moves the third object 4808c to create an overlap area with the second object 4808b. As further shown, the scene-based image editing system 106 occludes the second object 4808b using the third object 4808c within the overlap area. In particular, the scene-based image editing system 106 determines that the object depth of the second object 4808b is greater than the object depth of the third object 4808c and occludes the second object 4808b within the overlap area accordingly. In some cases, the scene-based image editing system 106 determines which object has the greater object depth and which object has the lesser object depth upon detecting that the second object 4808b and the third object 4808c are overlapping”; Note: in this example, object 4808c is equivalent to the first object of claim 8, and object 4808b is equivalent to the second object of claim 8. Fig. 48B shows how the first part (upper portion) of object 4808b is displayed, but the second part (lower portion) is not displayed, due to the depth of object 4808c being smaller; see screenshot of Fig. 48A-48C above); and displaying the first part of the first object and abstaining from displaying the second part of the first object overlapping the second object when the depth information of the first object is larger than the depth information of the second object (Fig. 48C, Paragraph 0596 – “the scene-based image editing system 106 moves the third object 4808c farther to create an overlap area between the first object 4808a and the third object 4808c. As further shown, the scene-based image editing system 106 occludes the third object 4808c using the first object 4808a within the overlap area. In particular, the scene-based image editing system 106 determines that the object depth of the third object 4808c is greater than the object depth of the first object 4808a and occludes the third object 4808c within the overlap area accordingly”; Note: in this example, object 4808c is equivalent to the first object of claim 8, and object 4808a is equivalent to the second object of claim 8. Fig. 48C shows how the first part (upper portion) of object 4808c is displayed, but the second part (lower portion) is not displayed, due to the depth of object 4808c being larger; see screenshot of Fig. 48A-48C above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to not display part of an object when its depth is greater than another object overlapping it and vice versa because logically, in a view where one object is in front of the other (having less depth), part of the back object (having greater depth) will be visually blocked by the front object. Therefore, having that occlusion of the back object is beneficial for creating a realistic view that mimics human sight and perspective. Plus, it would be difficult to see either object if they are overlapping and both fully displayed at the same time.

Regarding claim 9, Zhang in view of Ding teaches the method of claim 6. Zhang does not teach wherein: the first image comprises the first part of the second object and the second part of the second object; and the displaying the third image comprises displaying the first part of the second object, based on a relationship between the second object and the first object, without displaying the second part of the second object. However, Ding teaches wherein: the first image comprises the first part of the second object and the second part of the second object (Fig. 48A – The figure shows an image comprising an entire stop sign 4808b, which is the second object in this case); and the displaying the third image comprises displaying the first part of the second object, based on a relationship between the second object and the first object, without displaying the second part of the second object (Fig. 48B, Paragraph 0594 – “the scene-based image editing system 106 moves the third object 4808c to create an overlap area with the second object 4808b. As further shown, the scene-based image editing system 106 occludes the second object 4808b using the third object 4808c within the overlap area. In particular, the scene-based image editing system 106 determines that the object depth of the second object 4808b is greater than the object depth of the third object 4808c and occludes the second object 4808b within the overlap area accordingly”; Note: in this case, object 4808c (person) is equivalent to the first object of claim 9, and object 4808b (stop sign) is equivalent to the second object of claim 9. Fig. 48B shows how the first part (upper portion) of object 4808b is displayed, but the second part (lower portion) is not displayed, due to the depth relationship between object 4808b and 4808c; see screenshot of Fig. 48A-48C above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to display only one part of an object based on its relationship with another object because in the case where the objects are overlapping, realistically, there will be parts of the back object that are occluded due to the overlap. Therefore, having that occlusion is beneficial for creating a realistic view that mimics human sight and perspective.
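
The depth rule the rejection draws from Ding for claims 7-9 amounts to painter's-algorithm occlusion: in an overlap area, the object with the greater depth is hidden by the object with the lesser depth. A minimal sketch (the scene data and render loop are illustrative assumptions, not Ding's code):

```python
# Depth-aware occlusion per the cited Ding passages: greater depth = farther
# from the viewer = occluded within any overlap area.

def render_order(objects):
    """Back-to-front ordering: larger depth draws first, so nearer objects
    draw last and occlude overlapping deeper objects (painter's algorithm)."""
    return sorted(objects, key=lambda o: o["depth"], reverse=True)

scene = [
    {"name": "first_object", "depth": 5.0},   # moved behind -> larger depth
    {"name": "second_object", "depth": 2.0},  # nearer -> smaller depth
]
for obj in render_order(scene):
    print("draw", obj["name"])  # first_object, then second_object on top
```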
Regarding claim 10, Zhang in view of Ding teaches the method of claim 6. Zhang does not teach wherein: the first image comprises the second object having a first size; and the displaying the second image comprises displaying the second object having a second size different from the first size based on a relationship between the second object and the first object. However, Ding teaches wherein: the first image comprises the second object having a first size (Fig. 45A-45C – Fig. 45A shows an image having a person, which is the second object in this case, having a first size); and the displaying the second image comprises displaying the second object having a second size different from the first size based on a relationship between the second object and the first object (Fig. 45A-45C, Paragraph 0553-0555 – “the scene-based image editing system 106 moves and resizes the first object 4508a via a perspective-aware object move operation… the scene-based image editing system 106 moves the first object 4508a so that the first object 4508a is closer to the vanishing point 4510 than the second object 4508b… the scene-based image editing system 106 maintains a perspective scale attribute for the first object 4508a and updates the value of the perspective scale as the first object 4508a is resized and/or moved within the digital image 4806… based on the object depth of the first object 4508a at its position resulting from the move, the scene-based image editing system 106 determines that the object depth of the first object 4508a is now greater than the object depth of the second object 4508b”; Note: in Fig. 45B and 45C, object 4508a (person/first object) is displayed having a second size different from the size in Fig. 45A based on a depth/perspective scale relationship between the objects in the image). Since Zhang already teaches changing an object’s size (Paragraph 0241 – “the object recommendation system 106 generates the second composite image 2008 using a recommended location and/or a recommended scale for the second foreground object image 2010”), it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to display the object with a different size based on its relationship with other objects for the benefit of creating a realistic view and image. For example, when two people are standing next to or near each other in an image, like in Fig. 45B of Ding, their bodies will look about the same size compared to when one is far from the other. In another example, looking at Fig. 20B of Zhang above, if the person was larger than the car, it would not be accurate to a real-life depiction of a person standing on a car, so the person should be resized in that case. Therefore, taking depth, scale, and distance into account when sizing an object in an image will help make the image more realistic.
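
The claim-10 rationale (resizing an object according to its depth/perspective relationship with other objects) can be illustrated with a standard pinhole-style relation in which on-screen scale varies inversely with depth. This relation is an illustrative assumption, not Ding's actual perspective-scale model:

```python
# Illustrative pinhole-style relation: displayed scale is inversely
# proportional to depth, so moving an object deeper shrinks it.

def perspective_scale(base_scale: float, base_depth: float, new_depth: float) -> float:
    """Rescale an object moved from base_depth to new_depth."""
    return base_scale * (base_depth / new_depth)

# Moving an object twice as deep into the scene halves its displayed size.
print(perspective_scale(1.0, base_depth=2.0, new_depth=4.0))  # -> 0.5
```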
Regarding claim 12, Zhang teaches an electronic device (Paragraph 0243 – “FIG. 21 illustrates the object recommendation system 106 implemented by the computing device 2100 (e.g., the server(s) 102 and/or one of the client devices 110a-110n”) comprising: a display (Paragraph 0244 – “the graphical user interface manager 2102 provides a graphical user interface for display on the client device”); at least one processor (Paragraph 0249 – “the components 2102-2116 can include one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices, such as a client device or server device”); and a memory configured to store instructions executed by the at least one processor (Paragraph 0248-0249 – “the object recommendation system 106 includes data storage 2110. In particular, data storage 2110 (implemented by one or more memory devices…Each of the components 2102-2116 of the object recommendation system 106 can include software, hardware, or both. For example, the components 2102-2116 can include one or more instructions stored on a computer-readable storage medium and executable by processors of one or more computing devices, such as a client device or server device. When executed by the one or more processors, the computer-executable instructions of the object recommendation system 106 can cause the computing device(s) to perform the methods described herein”), wherein the instructions cause the electronic device to: when a first user input (user interaction with the selectable option → see quote) indicating instruction to add a first object (foreground object → see quote) is received for a first image (background image → see quote) displayed on the display (Paragraph 0142, 0145 – “the object recommendation system 106 retrieves and provides a plurality of digital images for display within a search results area 708 the graphical user interface 702 as candidate background images...the object recommendation system 106 also provides a selectable option 714 for providing an indication to search for a foreground object image for use in generating the composite image with the background image 710. Indeed, in one or more embodiments, in response to a user interaction with the selectable option 714, the object recommendation system 106 receives an indication to search for one or more foreground object images that are compatible with the background image 710”; Note: the background image is equivalent to the first image, and the user interaction with the selectable option is equivalent to the first user input), generating a first object image information (foreground object image → see quote) (Paragraph 0059, 0145, 0147 – “the object recommendation system 106 on the server(s) 102 utilizes the one or more search engines to generate a recommendation for utilizing a foreground object image with the background image in generating a composite image…in response to a user interaction with the selectable option 714, the object recommendation system 106 receives an indication to search for one or more foreground object images that are compatible with the background image 710. Accordingly, in some embodiments, in response to detecting a user interaction with the selectable option 714, the object recommendation system 106 identifies one or more foreground object images to recommend in response… the object recommendation system 106 provides a recommendation for display within the graphical user interface 702”; Note: the foreground object image is equivalent to the first object image information) that comprises a first part of the first object and a second part of the first object (Fig. 18A – The figure shows that the first object image information comprises an image of the whole first object, including a first and second part; see modified screenshot of Fig. 18A above); display, on the display, a second image (composite image → see quote) in which the first object is added to the first image, based on the first object image information (Paragraph 0150 – “upon a selection of the foreground object image 722, the object recommendation system 106 generates and provides a composite image 724 that combines the foreground object image 722 with the background image 710”; Note: the composite image is equivalent to the second image including the added first object (foreground object)); and when a second user input (user interaction indicating a position or scaling → see quote) indicating an instruction to alter the first object is received for the second image, display a third image (composite image with changed positioning or scaling → see quote and note) (Paragraph 0042, 0230 – “the object recommendation system generates the composite image by positioning and or scaling the foreground object image in accordance with additional user selections received via the graphical user interface. Indeed, in some cases, the object recommendation system receives a user interaction indicating a positioning and or a scaling for the foreground object image within the composite image… the object recommendation system 106 receives a selection of the selectable option 1804a via the graphical user interface 1800. In response to the selection, the object recommendation system 106 utilizes the auto-composite model to adjust the size of the foreground object image within the composite image 1802. In particular, the object recommendation system 106 executes a scale prediction model of the auto-composite model to modify the scale of the foreground object image within the composite image 1802 based on a scale of the background image 1806”; Note: the user interaction indicating a position or scaling is equivalent to the second user input. The composite image with changed positioning or scaling is equivalent to the third image). Zhang does not teach a second image in which the first part of the first object is added to the first image without adding the second part of the first object to the first image; nor a third image in which the second part of the first object is added to the second image. However, Ding teaches a second image in which the first part of the first object is added to the first image without adding the second part of the first object to the first image (Fig. 45A-45C, Paragraph 0558 – “as shown in FIG. 45A, the first object 4508a is only partially displayed within the digital image 4506. For instance, in some cases, when the digital image 4506 was captured, the top portion of the first object 4508a was within frame (and captured as part of the digital image 4506) while the bottom portion of the first object 4508a was not within frame”; Note: Fig. 45A, which is equivalent to the second image, shows an image where the first object 4508a has a first part that is displayed (upper body) and a second part (lower body) that is not displayed; see screenshot of Fig. 45A-45C above); and a third image in which the second part of the first object is added to the second image (Fig. 45A-45C, Paragraph 0559 – “As shown in FIGS. 45B-45C, however, the scene-based image editing system 106 provides the first object 4508a for display within its entirety within the digital image 4506”; Note: Fig. 45B and 45C, which are equivalent to the third image, show an image where the first object 4508a has a first part that is displayed (upper body) and a second part (lower body) that is also displayed; see screenshot of Fig. 45A-45C above). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified Zhang to incorporate the teachings of Ding to display an image with the entire object when the original image only shows part of the object for the benefit of allowing the user to have a better view of the object. For example, in the case where the object is the main subject but is not properly centered or arranged, then moving the object so that all parts of it are displayed would help showcase it better. In Zhang, there is an option for the user to select where the object should be added to the image (Paragraph 0221 – “the object recommendation system 106 receives spot input 1608 via the user interaction with the background image 1606. In one or more embodiments, the object recommendation system 106 utilizes the spot input 1608 as an indication of location for the foreground object image within the resulting composite image”), meaning they can add the object anywhere on the image, even in locations where part of the object might be cut off. Having additional options for the user would help them in cases where they changed their mind about the initial selection and want to move the object to a place for better view.

Regarding claim 13, Zhang in view of Ding teaches the electronic device of claim 12. Zhang further teaches wherein the instructions further cause the electronic device to identify, through an artificial intelligence computing device, attributes of the first object comprising a type or a characteristic of the first object based on a shape configured by the first user input (Paragraph 0083, 0238 – “a geometry-lighting-aware embedding space includes an embedding space for embeddings that encode the lighting and/or geometry features of corresponding digital data (e.g., background images or foreground object images)…the object recommendation system 106 determines an object class of the sketch input 1906 and utilizes the object class in narrowing the search. For instance, in some cases, the object recommendation system 106 determines the object class via a classification neural network. In some implementations, however, the image search engine searches for results corresponding to the sketch input 1906 without explicitly determining the object class (e.g., using embeddings that implicitly encode the object class or object features such as shape, color, etc.)”; Note: the neural network is artificial intelligence. An object class or embeddings are identified for the foreground object (first object) based on the sketch input. An object class is equivalent to a type, and embeddings are equivalent to characteristics).

Regarding claim 14, Zhang in view of Ding teaches the electronic device of claim 13. Zhang further teaches wherein: the instructions further cause the electronic device to determine relevance between the attributes of the first object and environment information of the first image (Paragraph 0146 – “the object recommendation system 106 utilizes the neural network to generate an embedding for the background image 710 and embeddings for a plurality of foreground object images within an embedding space (e.g., a geometry-lighting-sensitive embedding space). Further, the object recommendation system 106 determines compatibility based on the embeddings, such as by determining similarity scores between the embeddings for the foreground object images and the embedding for the background image 710”; Note: the compatibility is the relevance between the embeddings/attributes of the foreground object and background. The background is the environment information), and the first object image information comprises information on a new object related to the first object based on the relevance (Paragraph 0237-0238 – “the object recommendation system 106 searches for and retrieves one or more foreground object images utilizing the corresponding search engine(s). In particular, as mentioned, the object recommendation system 106 executes a search via an image search engine using sketch input… the object recommendation system 106 determines an object class of the sketch input 1906 and utilizes the object class in narrowing the search. For instance, in some cases, the object recommendation system 106 determines the object class via a classification neural network. In some implementations, however, the image search engine searches for results corresponding to the sketch input 1906 without explicitly determining the object class (e.g., using embeddings that implicitly encode the object class or object features such as shape, color, etc.)”; Note: the foreground object images, which are the first object image information, comprise embeddings and/or object class of the searched foreground objects, which are the new objects. Examples of the new objects are shown in Fig. 19 1912a-1912d; see screenshot of Fig. 19 above).

Regarding claim 15, Zhang in view of Ding teaches the electronic device of claim 14. Zhang further teaches wherein the environment information of the first image comprises at least one piece of place information, time information, or weather information (Paragraph 0074 – “As shown in FIG. 2B, foreground object images recommended by both systems appear to match the semantics of the background image of the query 220 (e.g., the foreground object images include trains that match with the train tracks of the background image)”; Note: the background image is the first image, and the environment information includes information on the place, which in this case are the train tracks).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Park et al. (US 20210073943 A1) teaches a method of generating an image by placing an object on a background image.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE HAU MA whose telephone number is (571)272-2187. The examiner can normally be reached M-Th 7-5:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE HAU MA/
Examiner, Art Unit 2617

/KING Y POON/
Supervisory Patent Examiner, Art Unit 2617

Prosecution Timeline

Jun 07, 2024: Application Filed
Jan 21, 2026: Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602750: DIFFERENTIABLE EMULATION OF NON-DIFFERENTIABLE IMAGE PROCESSING FOR ADJUSTABLE AND EXPLAINABLE NON-DESTRUCTIVE IMAGE AND VIDEO EDITING
Granted Apr 14, 2026 (2y 5m to grant)
Patent 12597208: BUILDING INFORMATION MODELING SYSTEMS AND METHODS
Granted Apr 07, 2026 (2y 5m to grant)
Patent 12573217: SERVER, METHOD AND COMPUTER PROGRAM FOR GENERATING SPATIAL MODEL FROM PANORAMIC IMAGE
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12561851: HIGH-RESOLUTION IMAGE GENERATION USING DIFFUSION MODELS
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12536734: Dynamic Foveated Point Cloud Rendering System
Granted Jan 27, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 81%
With Interview: 99% (+36.4%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 21 resolved cases by this examiner. Grant probability derived from career allow rate.
