Prosecution Insights
Last updated: April 19, 2026
Application No. 18/502,375

System and Method for Modifying Search Metrics Based on Features of Interest Determined from Interactions with Images

Final Rejection §103
Filed: Nov 06, 2023
Examiner: PHAM, TUAN A
Art Unit: 2163
Tech Center: 2100 — Computer Architecture & Software
Assignee: Shopify Inc.
OA Round: 4 (Final)
Grant Probability: 84% (Favorable)
Expected OA Rounds: 5-6
Median Time to Grant: 2y 11m
Grant Probability With Interview: 99%

Examiner Intelligence

Career Allow Rate: 84% (583 granted / 697 resolved), +28.6% vs TC avg (above average)
Interview Lift: +27.8% (allowance rate in resolved cases with an interview vs. without)
Typical Timeline: 2y 11m average prosecution; 32 applications currently pending
Career History: 729 total applications across all art units

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§102: 8.1% (-31.9% vs TC avg)
§103: 47.1% (+7.1% vs TC avg)
§112: 10.4% (-29.6% vs TC avg)
Tech Center averages are estimates. Based on career data from 697 resolved cases.
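As a side note, the four "vs TC avg" deltas above are mutually consistent with a single Tech Center baseline of roughly 40%. A minimal sketch that rechecks the arithmetic (the 40.0% baseline is back-derived from the displayed numbers and is an assumption, not a published USPTO figure):

```python
# Recompute the "vs TC avg" deltas shown above. The per-statute rates
# come from the dashboard; the 40.0% Tech Center baseline is inferred
# (rate minus displayed delta gives 40.0 for all four statutes).
examiner_rates = {"101": 19.3, "102": 8.1, "103": 47.1, "112": 10.4}  # percent
tc_average = 40.0  # assumed baseline, back-derived from the deltas

for statute, rate in examiner_rates.items():
    delta = rate - tc_average
    print(f"§{statute}: {rate:.1f}% ({delta:+.1f}% vs TC avg)")
```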

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Amendment

This Action is responsive to the Applicant's Amendment/Remarks filed on 10/29/2025. In the Amendment, applicant amended claims 1-3, 5, 12, 16-20 and 22. For the Arguments and Remarks filed in the Amendment, please see the Examiner's responses following the rejections under 35 U.S.C. § 103. Please note that claims 1-3, 5-9 and 11-22 are pending.

Examiner Notes

The Examiner cites particular columns, paragraphs, figures and line numbers in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claims, other passages and figures may apply as well. It is respectfully requested that, in preparing responses, the applicant fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of each passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 U.S.C. § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action: "A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action. The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:

1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1-3, 5, 7-9 and 11-22 are rejected under 35 U.S.C. 103 as being unpatentable over Sil et al. (US PGPUB 2024/0070943, hereinafter Sil), in view of Cameron Smith (US PGPUB 2024/0331247, hereinafter Smith).

As per claim 1, Sil discloses: A computer-implemented method comprising: detecting edits to a first image that include applying changes to at least one feature of an item displayed in the first image (Sil, e.g., [0062], [0094-0095], "...detects a selection of one of the editing presets...modifies the digital image using the selected editing preset"); generating a modified first image in which the item has been changed (Sil, e.g., [0026], [0039-0040] and [0049-0050], "...generates a recommendation for an editing preset in accordance with an editing intent for editing a digital image..."); determining, from the modified first image, the at least one feature of the item that has been changed (Sil, e.g., [0041-0043], "...determines an editing preset that corresponds to the editing intent (e.g., using an editing preset map(s)) based on an editing state of an edited digital image associated with the editing preset...the edited digital image database further stores metadata for the edited digital images (e.g., metadata indicating editing operations applied to the edited digital images) and/or the digital images from which the edited digital images where created (e.g., the initial digital images)..."); conducting an image search using the at least one feature by modifying a search metric to bias the image search towards locating one or more second images based on the at least one feature of the item that has been changed in the first image to generate the modified first image (Sil, e.g., [0042-0043] and [0053-0054], "...user query includes a request corresponding to a digital image to be edited..."; see also Fig. 7 and the associated description, [0119-0122], "...user query for editing the digital image...extracts an editing intent..."); and displaying a search result that includes the one or more second images (Sil, e.g., [0045], [0052], [0061], [0134], [0170], "...display various potential results for ease in comparison and selection from among the recommendations...presenting output to a user...provide graphical data to a display for presentation to a user...images providing a visual representation of the search results is discussed as being displayed as the search results...").

To make the record clearer regarding the language "modifying a search metric to bias the image search towards locating one or more second images based on the at least one feature of the item that has been changed in the first image to generate the modified first image": although, as stated above, Sil functionally discloses searching/querying an edited image (Sil, e.g., [0042-0043] and [0053-0054], "...user query includes a request corresponding to a digital image to be edited..."; see also Fig. 7 and the associated description, [0119-0122]), Smith, in an analogous art, discloses this limitation (Smith, e.g., [0577-0578], "...image editing system provides an option to the user of the client device to query a portion to be selected...the user of the client device queries the scene-based image editing system to select 'bags'...Based on the query, the scene-based image editing system..."). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Smith and Sil to perform facial expression transfer and facial expression animations to generate modified digital images or animations (Smith, e.g., [abstract]).
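To make the disputed "modifying a search metric" limitation concrete, here is a minimal sketch of the general technique of biasing a weighted similarity metric toward changed features. It is an illustration under stated assumptions (fixed-length feature vectors, a cosine score, an arbitrary boost factor) and is not the applicant's or the cited references' actual implementation:

```python
import numpy as np

def biased_image_search(orig_feats, edited_feats, candidates, boost=3.0):
    """Illustrative only: up-weight the feature dimensions that changed
    between the original and edited image, then rank candidate images by
    a weighted cosine similarity against the edited image. The feature
    representation, boost factor, and score are all assumptions."""
    changed = np.abs(edited_feats - orig_feats)            # where the edit landed
    weights = 1.0 + boost * changed / (changed.max() + 1e-9)
    query = weights * edited_feats                          # re-weighted query

    def score(cand):
        wc = weights * cand
        return float(wc @ query) / (np.linalg.norm(wc) * np.linalg.norm(query) + 1e-9)

    return sorted(candidates, key=score, reverse=True)      # best match first
```

A "search metric comprising a feature weight" (claim 7) corresponds here to the `weights` vector applied inside the score.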
As per claim 2, the combination of Sil and Smith discloses: The method of claim 1, wherein the at least one feature is determined from the modified first image by associating the changes to the item with one or more portions of the first image (Sil, e.g., [0041-0043], "...editing preset that corresponds to the editing intent (e.g., using an editing preset map(s)) based on an editing state of an edited digital image associated with the editing preset...the edited digital image database further stores metadata for the edited digital images (e.g., metadata indicating editing operations applied to the edited digital images) and/or the digital images from which the edited digital images where created (e.g., the initial digital images)...").

As per claim 3, the combination of Sil and Smith discloses: The method of claim 2, wherein the at least one feature is determined by applying a feature extraction technique to at least the one or more portions of the first image (Sil, e.g., [0041-0043], "...determines an editing preset that corresponds to the editing intent (e.g., using an editing preset map(s)) based on an editing state of an edited digital image associated with the editing preset...the edited digital image database further stores metadata for the edited digital image...the digital images from which the edited digital images where created (e.g., the initial digital images)..."; see also Smith, e.g., [0577-0578], "...image editing system provides an option to the user of the client device to query a portion to be selected...the user of the client device queries the scene-based image editing system to select 'bags'...Based on the query, the scene-based image editing system...").

As per claim 5, the combination of Sil and Smith discloses: The method of claim 3, further comprising: providing an image editing tool (Sil, e.g., [0051], "...editing tool..."; see also Smith, e.g., [0476-0479], "...image editing tool..."); obtaining the at least one interaction with the first image based on edits to the first image made using the editing tool (Sil, e.g., [0051]; Smith, e.g., [0476-0479]); associating the edits with the one or more portions of the first image (Sil, e.g., [0042-0043], [0051]; Smith, e.g., [0476-0479]); and applying the feature extraction technique to identify the at least one feature of interest according to what was edited using the editing tool (Sil, e.g., [0041-0043], as quoted for claim 3; Smith, e.g., [0577-0578], as quoted for claim 3).

As per claim 7, the combination of Sil and Smith discloses: The method of claim 1, wherein the search metric comprises a feature weight (Smith, e.g., [0352], [0359], [0377], "...using weights from the localized object attention feature vector...").
As per claim 8, the combination of Sil and Smith discloses: The method of claim 1, wherein the at least one feature of interest is determined by: comparing an original first image to a modified first image to determine one or more changes (Sil, e.g., [0061], [0071], [0095], "...compares the tokens to the pre-determined object classes to determine whether the user query references an object..."; see also Smith, e.g., [0255-0257], "...provides more flexibility for editing digital images when compared..."); and associating the one or more changes with the at least one feature of interest (same citations).

As per claim 9, the combination of Sil and Smith discloses: The method of claim 8, wherein the comparing is performed by prompting a large language model (LLM) to describe what has changed between the original first image and the modified first image (Smith, e.g., [0255-0257], [0407], "...the scene-based image editing system 106 utilizes a machine learning model, such as one of the models (e.g., the clustering and subgraph proposal generation model)"; see also [0722], extracting at a different level, i.e., searching at different levels (category and sub-category)). The examiner asserts that searching by category and sub-category is equivalent to the use of a large language model.

As per claim 11, the combination of Sil and Smith discloses: The method of claim 8, further comprising: providing an image editing tool (Sil, e.g., [0051], "...editing tool..."; Smith, e.g., [0476-0479], "...image editing tool..."); accepting edits to the original first image using the editing tool to generate the modified first image (Sil, e.g., [0042-0043], "...edited digital image database further stores metadata for the edited digital images (e.g., metadata indicating editing operations applied to the edited digital images) and/or the digital images from which the edited digital images where created (e.g., the initial digital images)", and [0051], "...editing tool..."; see also Smith, e.g., [0476-0479]); associating the edits with one or more portions of the original first image (Sil, e.g., [0042-0043], [0051]; Smith, e.g., [0476-0479]); and applying the feature extraction technique to identify the at least one feature of interest according to what was edited in the original first image using the editing tool (Sil, e.g., [0041-0043], as quoted above; Smith, e.g., [0577-0578], as quoted above).
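Claim 9's LLM-based comparison step can likewise be sketched. `vlm_describe` below is a hypothetical stand-in for any vision-capable LLM client; its name, signature, and the prompt are inventions of this sketch, not an actual API or the applicant's method:

```python
# Hypothetical sketch of claim 9: prompt a vision-capable LLM to
# describe what changed between the original and modified first image.
# `vlm_describe` is a placeholder the caller must supply; it is NOT a
# real library function.
import base64

def _encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def describe_changes(original_path: str, modified_path: str, vlm_describe) -> str:
    prompt = (
        "Compare these two product images. Describe only what has changed "
        "about the item (e.g., color, pattern, shape of a part)."
    )
    # Pass both images to the (assumed) multimodal client.
    return vlm_describe(prompt, images=[_encode(original_path), _encode(modified_path)])
```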
As per claim 12, the combination of Sil and Smith discloses: The method of claim 1, further comprising training at least one model (Sil, e.g., [0069], [0095], [0117], "...utilizes various types of training data in learning the parameters...") associated with one or more of: i) the at least one interaction (Sil, e.g., [0051], [0063], "...an editing operation additionally or alternatively includes front-end processing steps used to modify a digital image (e.g., user interactions, such as selecting a region of the digital image to modify)..."); ii) associating the at least one interaction with one or more portions of the first image (Smith, e.g., [0577-0578], "...image editing system provides an option to the user of the client device to query a portion to be selected...the user of the client device queries the scene-based image editing system to select 'bags'...Based on the query, the scene-based image editing system..."; Sil, e.g., [0041-0043]); or iii) the image search (Sil, e.g., [0042-0043] and [0053-0054], "...user query includes a request corresponding to a digital image to be edited..."; see also Fig. 7 and the associated description, [0119-0122], "...user query for editing the digital image...extracts an editing intent..."); based on an item category associated with the at least one feature of interest (Smith, e.g., [0577-0578], as quoted above).

As per claim 13, the combination of Sil and Smith discloses: The method of claim 12, wherein the item category comprises a product category (Sil, e.g., [0095], "...digital images that represent hundreds of object classes/categories..." and [0107-0108]; see also Smith, e.g., [0266-0268]).

As per claim 14, the combination of Sil and Smith discloses: The method of claim 1, wherein the at least one interaction comprises selection of a feature, zooming of a feature, annotating a feature, or identifying a feature using text, voice or eye gaze (Smith, e.g., [0577-0578], as quoted above; Sil, e.g., [0041-0043]).

As per claim 15, the combination of Sil and Smith discloses: The method of claim 1, wherein the first image is obtained from a first search and the image search corresponds to a subsequent search (Sil, e.g., [0041-0043], as quoted above; Smith, e.g., [0577-0578], as quoted above).
Claims 16-19 and 21-22 are essentially the same as claims 1-3, 5-9 and 11-15, except that they set forth the claimed invention as a system rather than as a method; they are therefore rejected for the same reasons set forth in the rejections of claims 1-3, 5-9 and 11-15. Claim 20 is essentially the same as claim 1, except that it sets forth the claimed invention as a computer-readable medium rather than as a method; it is therefore rejected for the same reasons set forth in the rejection of claim 1.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Sil, in view of Smith, and further in view of Hao et al. (US Patent 10,467,507, hereinafter Hao). As per claim 6: The method of claim 3, wherein the feature extraction technique applies a scale invariant feature transform (SIFT) algorithm. The combination of Sil and Smith discloses a feature extraction technique (Sil, e.g., [0022-0023], [0027-0029], "...query to extract information—such as an editing intent and/or an object—that indicates targeted modifications for a digital image..."), but does not explicitly disclose that it "applies a scale invariant feature transform (SIFT) algorithm". However, Hao, in an analogous art, discloses applying a scale invariant feature transform (SIFT) algorithm (Hao, e.g., col. 10, lines 10-30, "...scale-invariant feature transform (SIFT)..."). Thus, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Hao, Smith and Sil to extract visual features from tagged stock images in a database and generate a search index of visual features, where the tagged stock images may be tagged with verified data containing product information related to what each stock image depicts (Hao, e.g., col. 10, lines 47-60).
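For readers unfamiliar with the technique the claim 6 dispute turns on, SIFT is a standard keypoint/descriptor extractor. A minimal sketch using OpenCV follows; the region-of-interest cropping, meant to mirror applying extraction to "the one or more portions of the first image", is an illustrative assumption, not the claimed implementation:

```python
# Minimal SIFT sketch using OpenCV (pip install opencv-python).
# Cropping to an edited region of interest is an assumption added here
# for illustration; it is not taken from the claims or the references.
import cv2

def sift_features(image_path, roi=None):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if roi is not None:
        x, y, w, h = roi              # (x, y, width, height) of the edited portion
        img = img[y:y + h, x:x + w]
    sift = cv2.SIFT_create()          # scale-invariant feature transform
    keypoints, descriptors = sift.detectAndCompute(img, None)
    return keypoints, descriptors     # descriptors: N x 128 float array
```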
Response to Arguments

The Examiner respectfully reminds applicant of the broadest reasonable interpretation standard (see MPEP 2111): "During examination, the claims must be interpreted as broadly as their terms reasonably allow." In re American Academy of Science Tech Center, 367 F.3d 1359, 1369, 70 USPQ2d 1827, 1834 (Fed. Cir. 2004) (the USPTO uses a different standard for construing claims than that used by district courts; during examination the USPTO must give claims their broadest reasonable interpretation). In Phillips v. AWH Corp., 415 F.3d 1303, 75 USPQ2d 1321 (Fed. Cir. 2005), the court further elaborated on the "broadest reasonable interpretation" standard and recognized that "[t]he Patent and Trademark Office ('PTO') determines the scope of claims in patent applications not solely on the basis of the claim language, but upon giving claims their broadest reasonable construction." Thus, when interpreting claims, the courts have held that Examiners should (1) interpret claim terms as broadly as their terms reasonably allow and (2) interpret claim phrases as broadly as their construction reasonably allows.

Applicant's arguments filed 10/29/2025 with respect to claims 1-3, 5-9 and 11-22 have been considered but are moot in view of the new ground(s) of rejection necessitated by applicant's amendment to the claims. Applicant's newly amended features are taught, expressly or implicitly, by the prior art of record (see the new ground(s) of rejection set forth above). With respect to the newly amended subject matter, the Examiner has cited the relevant paragraphs from the cited references in rejecting the claims; please refer to the corresponding sections of this Office action.

Additional Art Considered

The prior art made of record and not relied upon is considered pertinent to the Applicant's disclosure. The following reference is cited to further show the state of the art at the time of Applicant's invention with respect to modifying search metrics based on features of interest determined from interactions with images, i.e., receiving input indicative of the features in one image of an item that the user is most interested in, and revising/refining a subsequent search for images containing the items or features of interest. The search-query keywords may be modified or augmented based on the items or features of interest, and search results retrieved by selecting items having features most similar to the user's input.

a. Lin et al. (US PGPUB 2022/0122306, hereinafter Lin), "Attribute Control Techniques For Image Editing", discloses an image editing system that computes a metric for an attribute in an input image as a function of a latent space representation of the input image and a filtering vector for editing the input image. Lin also teaches image editing tools that provide features enabling a user to edit or modify an image [0040-0041]. Lin further teaches querying an external database to retrieve a set of training images including an attribute that is uncommon in the first training data set, and generating latent space representations of the training images with that uncommon attribute [0156-0157].

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUAN A PHAM, whose telephone number is (571) 270-3173. The examiner can normally be reached M-F, 7:45 AM - 6:30 PM. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tony Mahmoudi, can be reached at 571-272-4078. The fax number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). For assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/TUAN A PHAM/
Primary Examiner, Art Unit 2163

Prosecution Timeline

Nov 06, 2023: Application Filed
Oct 19, 2024: Non-Final Rejection — §103
Jan 08, 2025: Response Filed
Mar 08, 2025: Final Rejection — §103
May 06, 2025: Response after Non-Final Action
Jun 04, 2025: Interview Requested
Jun 11, 2025: Applicant Interview (Telephonic)
Jun 12, 2025: Request for Continued Examination
Jun 13, 2025: Examiner Interview Summary
Jun 18, 2025: Response after Non-Final Action
Aug 09, 2025: Non-Final Rejection — §103
Oct 29, 2025: Response Filed
Jan 30, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12596679: METHOD AND APPARATUS PROVIDING A TIERED ELASTIC CLOUD STORAGE TO INCREASE DATA RESILIENCY (granted Apr 07, 2026; 2y 5m to grant)
Patent 12596758: IoT Enhanced Search Results (granted Apr 07, 2026; 2y 5m to grant)
Patent 12585718: System and Method for Feature Determination and Content Selection (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572561: METHOD AND APPARATUS FOR SYNCHRONOUSLY UPDATING METADATA IN DISTRIBUTED DATABASE (granted Mar 10, 2026; 2y 5m to grant)
Patent 12566777: SYSTEMS AND METHODS OFFLINE DATA SYNCHRONIZATION (granted Mar 03, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 84%
With Interview: 99% (+27.8%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 697 resolved cases by this examiner. Grant probability is derived from the career allow rate.
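The figures in this card follow from simple arithmetic on the career counts. A minimal sketch that rechecks them (583/697 comes from the Examiner Intelligence section; the with/without-interview rates are back-derived from the displayed 99% and +27.8%, since the underlying case split is not shown):

```python
# Career allow rate, as reported in the dashboard.
granted, resolved = 583, 697
career_allow_rate = granted / resolved
print(f"Career allow rate: {career_allow_rate:.1%}")   # -> 83.6%, shown as 84%

# Interview lift = allow rate with an interview minus allow rate without.
# 0.99 is displayed; 0.712 is back-derived (0.99 - 0.278), an assumption.
rate_with_interview, rate_without = 0.99, 0.99 - 0.278
print(f"Interview lift: {rate_with_interview - rate_without:+.1%}")  # -> +27.8%
```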
