Prosecution Insights
Last updated: April 19, 2026
Application No. 17/834,072

METHOD FOR FINDING PRODUCTS IN A NETWORK

Non-Final OA §103
Filed
Jun 07, 2022
Examiner
BROUGHTON, KATHLEEN M
Art Unit
2661
Tech Center
2600 — Communications
Assignee
Patty's GmbH
OA Round
3 (Non-Final)
83%
Grant Probability
Favorable
3-4
OA Rounds
2y 7m
To Grant
92%
With Interview

Examiner Intelligence

Grants 83% — above average
83%
Career Allow Rate
219 granted / 263 resolved
+21.3% vs TC avg
Moderate +8% lift
Without
With
+8.3%
Interview Lift
resolved cases with interview
Typical timeline
2y 7m
Avg Prosecution
34 currently pending
Career history
297
Total Applications
across all art units
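The headline figures above are simple ratios over the counts shown on this page. A minimal sketch reproducing them (the rounding conventions are an assumption):

```python
# Reproduce the Examiner Intelligence figures from the raw counts shown above.
# All inputs come from this page; whole-percent rounding is assumed.

granted = 219          # applications allowed by this examiner
resolved = 263         # total resolved cases (allowed + abandoned)
interview_lift = 8.3   # percentage-point lift reported for interviews

allow_rate = granted / resolved * 100          # career allow rate
with_interview = allow_rate + interview_lift   # estimated rate after an interview

print(f"Career allow rate: {allow_rate:.0f}%")     # 83%
print(f"With interview:    {with_interview:.0f}%") # 92%
```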

Statute-Specific Performance

§101
10.9%
-29.1% vs TC avg
§103
51.2%
+11.2% vs TC avg
§102
24.1%
-15.9% vs TC avg
§112
11.4%
-28.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 263 resolved cases
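Each bar above pairs the examiner's per-statute allowance rate with a delta against the Tech Center average, so the implied TC baseline (the black line) can be recovered by subtracting the delta. A quick sketch using only the numbers shown:

```python
# Recover the implied Tech Center baseline for each statute from the
# per-statute rate and the "vs TC avg" delta shown in the chart above.
rates = {            # statute: (examiner rate %, delta vs TC avg %)
    "101": (10.9, -29.1),
    "103": (51.2, +11.2),
    "102": (24.1, -15.9),
    "112": (11.4, -28.6),
}

for statute, (rate, delta) in rates.items():
    tc_avg = round(rate - delta, 1)  # black-line estimate
    print(f"Sec. {statute}: examiner {rate}% vs TC avg {tc_avg}%")
```

Every implied baseline works out to 40.0%, consistent with a single black reference line across all four bars.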

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on September 15, 2025 has been entered.

Response to Amendment

Receipt is acknowledged of claim amendments with associated arguments/remarks, received September 15, 2025. Claims 1, 5-9, 13-21 are pending, of which claims 1 and 17 were amended. Claims 2-4, 10-12 were cancelled.

Response to Arguments

Applicant's argument, see pg , filed September 15, 2025, with respect to the rejections of claims 1, 5-9, 13-21 under 35 U.S.C. § 103 has been fully considered but is not persuasive. Applicant summarizes that the "aim of the present invention is to create a plurality of pictures of an object only with textual (or linguistic) input which object is not actually known by the user…and one (or more) of them can be elected by the user," of which the elected image "is the basis for a search for 'actual' or 'real' objects within a network which 'real' objects are similar to the desired object" (Remarks – 09/15/2025 pg 7). The applicant provides an example of the "fictitious object" ("a blue tabletop" with "five black legs") with a specification citation to support the amendment that "all the specifications" are included in the generated plurality of image representations (Remarks – 09/15/2025 pg 8).
Applicant argues the cited art Surya et al (US 10713821), applied to claims 1, 13-17, 19-21, does not teach "a plurality of pictures with all the given features" because the prior art performs iterative modifications to generate the image of an object according to the text input of the user, and cites Surya et al (col 11, ln 25-39) as teaching alternative renditions rather than all of the given features (Remarks – 09/15/2025 pg 9). After careful review and consideration of applicant's claimed invention as compared to the cited prior art, the applicant's argument is not persuasive. In review of Surya et al, the applicant cites the generation of additional, alternative images (414, 416, 418) as not teaching "all the given features" from the text input. However, Surya et al first states that images are generated based on all of the given text features as requested by a user ("For example, the user may initially input the search query 402—"women's blue pants." FIG. 4 illustrates three synthetic images—408, 410, and 412—that were generated in response to the query 402 using the stage-I and stage-II GANs described above in reference to FIGS. 1-2." Col 11 ln 15-20). Surya et al explicitly states and provides an example of how a user text search query with multiple features ("women's," "blue," "pants") is converted to multiple synthetic images (408, 410, 412) that represent variations on the identical object, which are then presented to the user. Respectfully, the applicant's argument is not persuasive.

The applicant cites the secondary reference Yada et al (US 2020/0356591), applied to claims 1, 13-17, 19-21, as not curing the deficiencies alleged in applicant's argument regarding Surya et al (Remarks – 09/15/2025 pg 9-10). Respectfully, the examiner is not persuaded by applicant's argument regarding Surya et al (as discussed above).
The applicant cites the additional dependent-claim reference Walters et al (US 2020/0226807), applied to claims 5-9, 18, as not curing the deficiencies alleged in applicant's argument regarding Surya et al (Remarks – 09/15/2025 pg 10). Respectfully, the examiner is not persuaded by applicant's argument regarding Surya et al (as discussed above).

The examiner notes that the applicant's invention was disclosed particularly broadly, with no details provided in the claimed artificial intelligence model regarding the structure of how the function achieves the results. The figure provided does not remedy this concern. Regarding the specification, the applicant states that image generation programs are already known in the art and provides the examples of DALL-E and GPT-3 (Specification – 06/07/2022, ¶ [0009]), and the use of image search engines to identify similar objects is also well known in the art, which the examiner noted in the prior art rejection (most recently in Final Rejection – 04/15/2025, claim 1 § 103 rejection, "generating" step on page 6). The disclosure, specification, abstract, and drawings were not readily identified to provide additional details regarding novel ways to carry out the claimed invention, or novel structures in how to carry out the claimed invention, as compared to what was known to one of ordinary skill in the art at the time of filing. Respectfully, the examiner did not readily identify novel features distinguishable from the prior art based on the applicant's disclosure of the claimed invention. No further argument is presented. All arguments were addressed.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 13-17, 19-21 are rejected under 35 U.S.C. 103 as being unpatentable over Surya et al (US 10713821) in view of Yada et al (US 2020/0356591).
Regarding Claim 1, Surya et al teach a method for finding products in a network (generating synthetic images of products; Fig 4 and col 11 ln 15-22), comprising the steps of: generating a plurality of image representations of a fictitious non-existing object (applicant discloses generation of a 'fictitious non-existing object' is known in the art, specification ¶ [0008]-[0010] and Fig 1) with aid of linguistic or textual specifications for an image generation program of an electronic data processing system with Artificial Intelligence (an input search query 402 (text specifications) is input to generate synthetic images 408, 410, 412 using the stage-I and stage-II GANs (AI image generation); Figs 1-4 and col 11 ln 15-22), which generated plurality of image representations each contain all of the specifications of the fictitious non-existing object (the synthesized images 408, 410, 412 each demonstrate variations of the text input features 402; Fig 4 and col 11 ln 15-22); displaying the generated plurality of image representations on a display device of the electronic data processing system of a user (the synthesized images 408, 410, 412 from generators 150, 250 are displayed to a user on a display component 306; Figs 3, 4 and col 9 ln 41-50); subjecting at least one of the generated plurality of image representations selected by the user to an image analysis by the electronic data processing system (a particular representation of object of interest may be selected for further analysis, such as by a user with a touch-sensitive display 306; col 8 ln 24-31, col 10 ln 3-8).

Surya et al do not teach searching with the electronic data processing system for identical or similar real products in the network with a result of the image analysis; and displaying at least one identical or similar real product, found as a result of the searching, on the display device of the electronic data processing system of the user.
Yada et al is analogous art pertinent to the technological problem addressed in this application and teaches searching with the electronic data processing system for identical or similar real products in the network with a result of the image analysis (the generated image Ig according to user specifications is the query image input to retrieval component 110 with a search engine 112 and index 114 to search for similar real object images (products) based on the synthetic query image; Figs 1, 4 and ¶ [0041]-[0045], [0058]); and displaying at least one identical or similar real product, found as a result of the searching, on the display device of the electronic data processing system of the user (candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user by the retrieval component 110 on the display 1822 of user device 1802; Figs 1, 18 and ¶ [0046], [0120]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Surya et al with Yada et al including searching with the electronic data processing system for identical or similar real products in the network with a result of the image analysis; and displaying at least one identical or similar real product, found as a result of the searching, on the display device of the electronic data processing system of the user. By allowing a user to craft a custom query image for a search, a consumer can generate the object of interest prior to searching for the product, thereby streamlining the process to search, as recognized by Yada et al (¶ [0001]-[0002]).
Regarding Claim 13, Surya et al in view of Yada et al teach the method according to claim 1 (as described above), wherein the at least one of the generated plurality of image representations selected by the user for the image analysis is displayed on the display device next to the at least one identical or similar real product found as a result of the searching (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user by the retrieval component 110 on the display 1822 of user device 1802; Figs 1, 3, 4, 18 and ¶ [0046]-[0047], [0120]).

Regarding Claim 14, Surya et al in view of Yada et al teach the method according to claim 1 (as described above), wherein the image analysis comprises a similarity analysis of the at least one of the generated plurality of image representations selected by the user for the image analysis (Yada et al, the generated image Ig is searched for one or more similar candidate images in the search engine 112 which are then retrieved by the retrieval component and presented to the user via a user interface; Figs 3, 4 and ¶ [0044]-[0047]).

Regarding Claim 15, Surya et al in view of Yada et al teach the method according to claim 1 (as described above), wherein the at least one identical or similar real product found by the searching is selectable with an operating pointer to obtain further information about the at least one identical or similar real product (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user on the display 1822, which can be a GUI 1824 of user device 1802, and the user can then select a link about a product using either a mouse or the GUI to input and learn more information about the product; Figs 1, 4, 18 and ¶ [0049], [0063], [0120]).
Regarding Claim 16, Surya et al in view of Yada et al teach the method according to claim 1 (as described above), wherein a plurality of identical or similar real products are found by the searching and are displayed on the display device according to predeterminable criteria (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user on the display 1822, which can be further refined with the item selection component 118; Figs 1, 4, 18 and ¶ [0049], [0063], [0120]).

Regarding Claim 17, Surya et al teach a method for finding products in a network (generating synthetic images of products; Fig 4 and col 11 ln 15-22), comprising the steps of: generating a plurality of image representations of a fictitious non-existing object (applicant describes an example 'fictitious non-existing object' as a table, specification ¶ [0010] and Fig 1) with aid of linguistic or textual specifications for an image generation program of an electronic data processing system with Artificial Intelligence (an input search query 402 (text specifications) is input to generate synthetic images 408, 410, 412 using the stage-I and stage-II GANs (AI image generation); Figs 1-4 and col 11 ln 15-22), which generated plurality of image representations each contain all of the specifications of the fictitious non-existing object (the synthesized images 408, 410, 412 each demonstrate variations of the text input features 402; Fig 4 and col 11 ln 15-22); displaying the generated plurality of image representations together on a display device of the electronic data processing system of a user (the synthesized images 408, 410, 412 from generators 150, 250 are displayed to a user on a display component 306; Figs 3, 4 and col 9 ln 41-50); subjecting at least one of the generated plurality of image representations selected by the user to an image analysis by the electronic data processing system (a particular representation
of object of interest may be selected for further analysis, such as with a touch-sensitive display 306; col 8 ln 24-31, col 10 ln 3-8).

Surya et al do not teach searching with the electronic data processing system for identical or similar real products in the network with a result of the image analysis; and displaying at least one identical or similar real product, found as a result of the searching, on the display device of the electronic data processing system of the user.

Yada et al is analogous art pertinent to the technological problem addressed in this application and teaches searching with the electronic data processing system for identical or similar real products in the network with a result of the image analysis (the generated image Ig according to user specifications is the query image input to retrieval component 110 with a search engine 112 and index 114 to search for similar real object images (products) based on the synthetic query image; Figs 1, 4 and ¶ [0041]-[0045], [0058]); and displaying at least one identical or similar real product, found as a result of the searching, on the display device of the electronic data processing system of the user (candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user by the retrieval component 110 on the display 1822 of user device 1802; Figs 1, 18 and ¶ [0046], [0120]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Surya et al with Yada et al including searching with the electronic data processing system for identical or similar real products in the network with a result of the image analysis; and displaying at least one identical or similar real product, found as a result of the searching, on the display device of the electronic data processing system of the user.
By allowing a user to craft a custom query image for a search, a consumer can generate the object of interest prior to searching for the product, thereby streamlining the process to search, as recognized by Yada et al (¶ [0001]-[0002]).

Regarding Claim 19, Surya et al in view of Yada et al teach the method according to claim 17 (as described above), wherein the at least one of the generated plurality of image representations selected by the user for the image analysis is displayed on the display device next to the at least one identical or similar real product found as a result of the searching (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user by the retrieval component 110 on the display 1822 of user device 1802; Figs 1, 3, 4, 18 and ¶ [0046]-[0047], [0120]).

Regarding Claim 20, Surya et al in view of Yada et al teach the method according to claim 17 (as described above), wherein the image analysis comprises a similarity analysis of the at least one of the generated plurality of image representations selected by the user for the image analysis (Yada et al, the generated image Ig is searched for one or more similar candidate images in the search engine 112 which are then retrieved by the retrieval component and presented to the user via a user interface; Figs 3, 4 and ¶ [0044]-[0047]).

Regarding Claim 21, Surya et al in view of Yada et al teach the method according to claim 17 (as described above), wherein a plurality of identical or similar real products are found by the searching and are displayed on the display device according to predeterminable criteria (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user on the display 1822, which can be further refined with the item selection component 118; Figs 1, 4, 18 and ¶ [0049], [0063], [0120]).

Claims 5-9, 18 are rejected under 35 U.S.C.
103 as being unpatentable over Surya et al (US 10713821) in view of Yada et al (US 2020/0356591) and Walters et al (US 2020/0226807).

Regarding Claim 5, Surya et al in view of Yada et al teach the method according to claim 1 (as described above). Surya et al in view of Yada et al do not teach wherein several of the generated plurality of image representations of non-existing objects are selectable and at least one new pictorial representation of a non-existing object is generated from the selected generated plurality of image representations of non-existing objects.

Walters et al is analogous art pertinent to the technological problem addressed in this application and teaches wherein several of the generated plurality of image representations of non-existing objects are selectable (image creation logic circuitry 1015 may select one or more system architectures 1020, 1022, 1024, which each generate different synthetic images (synthetic images are equivalent to the described plurality of image representations of non-existing objects); Figs 1A, 1E and ¶ [0040]) and at least one new pictorial representation of a non-existing object is generated from the selected generated plurality of image representations of non-existing objects (selected models' synthetic images are combined with the template 1045, based on the customer device 1040 selected parameter set, to create a new image 1055; Figs 1A, 1E and ¶ [0040]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Yada et al and Surya et al with Walters et al including wherein several of the generated plurality of image representations of non-existing objects are selectable and at least one new pictorial representation of a non-existing object is generated from the selected generated plurality of image representations of non-existing objects.
By combining multiple synthetic images, a unique iterative synthetic image is generated, which may be used to better train the neural network, thereby improving the efficiency of the generator and discriminator, as recognized by Walters et al (¶ [0022]-[0023]).

Regarding Claim 6, Surya et al in view of Yada et al and Walters et al teach the method according to claim 5 (as described above), wherein the at least one generated pictorial representation of the non-existing object is displayed on the display device next to a display of the at least one identical or similar real product (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user by the retrieval component 110 on the display 1822 of user device 1802; Figs 1, 4, 18 and ¶ [0046], [0120]).

Regarding Claim 7, Surya et al in view of Yada et al and Walters et al teach the method according to claim 6 (as described above), wherein the image analysis comprises a similarity analysis of the at least one pictorial representation of the non-existing object (Yada et al, the generated image Ig can be further mixed by fixed weights or by the user to further modify the synthetic image by refining the image to generate more or less similarity to select features based on the user preference; Figs 1, 3, 6-8 and ¶ [0041], [0057]-[0058], [0067]-[0076]).
Regarding Claim 8, Surya et al in view of Yada et al and Walters et al teach the method according to claim 7 (as described above), wherein the at least one identical or similar real product found by the searching is selectable with an operating pointer to obtain further information about the at least one identical or similar real product (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user on the display 1822, which can be a GUI 1824 of user device 1802, and the user can then select a link about a product using either a mouse or the GUI to input and learn more information about the product; Figs 1, 4, 18 and ¶ [0049], [0063], [0120]).

Regarding Claim 9, Surya et al in view of Yada et al and Walters et al teach the method according to claim 8 (as described above), wherein a plurality of identical or similar real products are found by the searching and are displayed on the display device according to predeterminable criteria (Yada et al, candidate images of real objects that match the query image representing the synthetic object image searched are displayed to the user on the display 1822, which can be further refined with the item selection component 118; Figs 1, 4, 18 and ¶ [0049], [0063], [0120]).

Regarding Claim 18, Surya et al in view of Yada et al teach the method according to claim 17 (as described above). Surya et al in view of Yada et al do not teach wherein several of the generated plurality of image representations of non-existing objects are selectable and at least one new pictorial representation of a non-existing object is generated from the selected generated plurality of image representations of non-existing objects.
Walters et al is analogous art pertinent to the technological problem addressed in this application and teaches wherein several of the generated plurality of image representations of non-existing objects are selectable (image creation logic circuitry 1015 may select one or more system architectures 1020, 1022, 1024, which each generate different synthetic images (synthetic images are equivalent to the described plurality of image representations of non-existing objects); Figs 1A, 1E and ¶ [0040]) and at least one new pictorial representation of a non-existing object is generated from the selected generated plurality of image representations of non-existing objects (selected models' synthetic images are combined with the template 1045, based on the customer device 1040 selected parameter set, to create a new image 1055; Figs 1A, 1E and ¶ [0040]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of this application to combine the teachings of Yada et al and Surya et al with Walters et al including wherein several of the generated plurality of image representations of non-existing objects are selectable and at least one new pictorial representation of a non-existing object is generated from the selected generated plurality of image representations of non-existing objects. By combining multiple synthetic images, a unique iterative synthetic image is generated, which may be used to better train the neural network, thereby improving the efficiency of the generator and discriminator, as recognized by Walters et al (¶ [0022]-[0023]).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Jin et al (US 2018/0357519) teaches a system and method for generating an image based on a description of an object and then searching a database for a similar image of the generated object, with multiple results displayed to a user.
Martinez et al (US 11568576) teaches techniques for generation of photorealistic synthetic image data based on use of a generator and discriminator network, which can be used towards generation of images with products for a customer to evaluate the product. Bedi et al (US 2022/0101578) teaches a method, system and computer readable media for generating a composite image based on a generated fixed object in the image. Pinel et al (US 2020/0167832) teaches a system and method for generating advertisements based on a compilation of multiple layouts which can be altered based on user preference.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KATHLEEN M BROUGHTON whose telephone number is (571) 270-7380. The examiner can normally be reached Monday-Friday 8:00-5:00. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, John Villecco, can be reached on (571) 272-7319. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/KATHLEEN M BROUGHTON/
Primary Examiner, Art Unit 2661
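The claim-1 mapping above describes a four-step pipeline: generate candidate images from a text query, let the user select one, analyze the selected image, then search a catalog for similar real products. A minimal sketch of that flow follows; every name here (`generate_images`, `search_catalog`, the vector stand-ins for images, cosine similarity as the analysis step) is an illustrative assumption, not an implementation from the claims or the cited references.

```python
import math

# Illustrative sketch of the claim-1 pipeline as mapped in the rejection:
# (1) generate candidate images from a text query, (2) user selects one,
# (3) analyze the selection, (4) search a catalog for similar real products.
# Images are stood in for by small feature vectors; all names are hypothetical.

def generate_images(query: str, n: int = 3) -> list[list[float]]:
    """Stand-in for a text-to-image generator: n variants per query."""
    base = [float(ord(c) % 7) for c in query[:4]]
    return [[v + i * 0.1 for v in base] for i in range(n)]

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity between two image feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search_catalog(query_vec, catalog):
    """Image analysis + search: rank real products by similarity."""
    return sorted(catalog, key=lambda item: cosine(query_vec, item[1]), reverse=True)

# Usage: generate, select, search, display.
candidates = generate_images("blue table")     # plurality of representations
selected = candidates[0]                       # user picks one representation
catalog = [("product-A", selected), ("product-B", [9.0, 1.0, 1.0, 1.0])]
results = search_catalog(selected, catalog)
print(results[0][0])                           # best match: product-A
```

The ranking step is where the prior-art combination sits: the generator corresponds to the role the rejection assigns to Surya, and the similarity search over real-product images to the role assigned to Yada.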

Prosecution Timeline

Jun 07, 2022
Application Filed
Aug 15, 2024
Non-Final Rejection — §103
Jan 21, 2025
Response Filed
Apr 09, 2025
Final Rejection — §103
Sep 15, 2025
Request for Continued Examination
Sep 17, 2025
Response after Non-Final Action
Oct 12, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602915
FEATURE FUSION FOR NEAR FIELD AND FAR FIELD IMAGES FOR VEHICLE APPLICATIONS
2y 5m to grant Granted Apr 14, 2026
Patent 12597233
SYSTEM AND METHOD FOR TRAINING A MACHINE LEARNING MODEL
2y 5m to grant Granted Apr 07, 2026
Patent 12586203
IMAGE CUTTING METHOD AND APPARATUS, COMPUTER DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 24, 2026
Patent 12567227
METHOD AND SYSTEM FOR UNSUPERVISED DEEP REPRESENTATION LEARNING BASED ON IMAGE TRANSLATION
2y 5m to grant Granted Mar 03, 2026
Patent 12565240
METHOD AND SYSTEM FOR GRAPH NEURAL NETWORK BASED PEDESTRIAN ACTION PREDICTION IN AUTONOMOUS DRIVING SYSTEMS
2y 5m to grant Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
83%
Grant Probability
92%
With Interview (+8.3%)
2y 7m
Median Time to Grant
High
PTA Risk
Based on 263 resolved cases by this examiner. Grant probability derived from career allow rate.
