Prosecution Insights
Last updated: April 19, 2026
Application No. 18/801,387

METHOD, SYSTEM, AND DEVICE OF VIRTUAL DRESSING UTILIZING IMAGE PROCESSING, MACHINE LEARNING, AND COMPUTER VISION

Non-Final OA: §103, §DP
Filed
Aug 12, 2024
Examiner
TSWEI, YU-JANG
Art Unit
2614
Tech Center
2600 — Communications
Assignee
Walmart Apollo LLC
OA Round
1 (Non-Final)
84%
Grant Probability
Favorable
1-2
OA Rounds
2y 5m
To Grant
99%
With Interview

Examiner Intelligence

Grants 84% — above average
84%
Career Allow Rate
376 granted / 447 resolved
+22.1% vs TC avg
Strong +17% interview lift
+17.0%
Interview Lift
Comparison based on resolved cases with vs. without an interview
Typical timeline
2y 5m
Avg Prosecution
44 currently pending
Career history
491
Total Applications
across all art units
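
The panel above is simple ratio arithmetic over the examiner's resolved cases. Below is a minimal sketch (Python, illustrative only) of how the career allow rate, the percentage-point gap to the Tech Center average, and the interview lift could be computed; the ResolvedCase schema and the treatment of both deltas as percentage-point differences are assumptions, not stated on this page.

from dataclasses import dataclass

@dataclass
class ResolvedCase:
    granted: bool         # allowed vs. abandoned/rejected at disposal
    had_interview: bool   # at least one examiner interview of record

def allow_rate(cases: list[ResolvedCase]) -> float:
    # share of resolved cases that issued, e.g. 376 / 447 ≈ 0.84
    return sum(c.granted for c in cases) / len(cases) if cases else 0.0

def examiner_intelligence(cases: list[ResolvedCase], tc_avg_allow_rate: float) -> dict:
    overall = allow_rate(cases)
    with_iv = allow_rate([c for c in cases if c.had_interview])
    without_iv = allow_rate([c for c in cases if not c.had_interview])
    return {
        "career_allow_rate": overall,                     # e.g. 0.84
        "vs_tc_avg_points": overall - tc_avg_allow_rate,  # e.g. +0.221
        "interview_lift_points": with_iv - without_iv,    # e.g. +0.170
    }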

Statute-Specific Performance

§101: 5.5% (-34.5% vs TC avg)
§103: 66.4% (+26.4% vs TC avg)
§102: 5.6% (-34.4% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center average values are estimates • Based on career data from 447 resolved cases
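
The statute breakdown is a per-category frequency over the same resolved cases. A hedged sketch follows, assuming each case carries the set of statutes cited in its rejections and that the displayed percentage is the share of cases receiving each rejection type (neither the data shape nor that definition is confirmed by the page):

from collections import Counter

def statute_rates(case_rejections: list[set[str]]) -> dict[str, float]:
    # case_rejections: one set of cited grounds per resolved case, e.g. {"103", "DP"}
    n = len(case_rejections)
    counts = Counter(statute for cited in case_rejections for statute in cited)
    return {statute: count / n for statute, count in counts.items()}

def vs_tc_average(rates: dict[str, float], tc_avg: dict[str, float]) -> dict[str, float]:
    # percentage-point difference against the Tech Center average estimate,
    # e.g. {"103": +0.264} for the +26.4% shown above
    return {s: rates.get(s, 0.0) - tc_avg.get(s, 0.0) for s in tc_avg}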

Office Action

§103 §DP
DETAILED ACTION Notice of Pre-AIA or AIA Status The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA . Double Patenting The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969). A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA . A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b). The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to final Office action, see 37 CFR 1.113(c). A request for reconsideration while not provided for in 37 CFR 1.113(c) may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13. The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA /25, or PTO/AIA /26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer. Claims 1-10 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-10 of app 17/578,404 (now is US patent US 12062114 B2). Although the claims at issue are not identical, they are not patentably distinct from each other because they both claim the same subject matters and limitations as explained below. 
Claim 1 is determined to be obvious in light of claim 1 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claims 1 17/578,404 (US 12062114 B2) claim 1 1. A system comprising: one or more processors; and one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform operations comprising: 1. A system comprising: one or more processors; and one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform operations comprising: generating a first search result comprising a first item in response to a first user query; generating two or more second search results comprising two or more second items in response to two or more second user queries, wherein the two or more second user queries are for items of one or more different kinds from the first user query; after generating the first search result and the two or more second search results, modifying the first search result by generating a combination image that depicts subject matter engaging with a first item of a first search result and at least one of two or more second items of two or more second search results, wherein the combination image incorporates a background of an image of a user-defined subject matter, and wherein the subject matter comprises the user-defined subject matter; and generating one or more user interface elements to be displayed in conjunction with the combination image; and enabling the one or more user interface elements to perform one or more operations comprising at least one of: accessing a source of the first item, adding the first item to a shopping cart, or purchasing the first item, wherein the combination image comprises one or more of: a two-dimensional image, a three-dimensional image, a three-dimensional model, a video, or an animation. generating a combination image that depicts subject matter engaging with the first item of the first search result and at least one of the two or more second items of the two or more second search results; and generating one or more user interface elements to be displayed in conjunction with the combination image, wherein the one or more user interface elements enable a user to perform one or more operations, wherein the one or more operations comprises one or more of accessing a source of the first item, adding the first item to a shopping cart, and purchasing the first item, wherein the combination image comprises one or more of two-dimensional images, three-dimensional images, three-dimensional models, videos, or animations, wherein the combination image incorporates a background of an image of a user-defined subject matter, and wherein the subject matter comprises the user-defined subject matter. Claim 2 is determined to be obvious in light of claim 2 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 2 17/578,404 claim 2 2. 
The system of claim 1, wherein: the first search result comprises a plurality of first search results that each depict a respective different instance of the first item; and the combination image comprises a plurality of combination images, each of which is generated for a different one of the plurality of first search results to produce the user-defined subject matter engaging with different ones of the respective different instances of the first item. 2. The system of claim 1, wherein: the first search result further comprises a plurality of first search results that each depict a respective different instance of the first item; and the combination image comprises a plurality of combination images, each of which is generated for a different one of the plurality of first search results to produce the user-defined subject matter engaging with different ones of the respective different instances of the first item. Claim 3 is determined to be obvious in light of claim 3 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 3 17/578,404 claim 3 3. The system of claim 1, wherein: the user-defined subject matter comprises a person, a place, or a thing; the person comprises a user or a user-selected third party; the first item comprises an item worn by the user or the user-selected third party; the first item is selected from at least one of: clothing articles, accessories, or other items worn by the person, wherein the accessories are selected from at least one of: jewelry, eyewear, cosmetics, timepieces, piercing items, purses, bags, handbags, suitcases, or other carriables; the place or the thing is selected from at least one of: indoor structures, outdoor structures, or locations; the first item further comprises an item that is placed on or in the indoor structures, the outdoor structures, or the locations; and the first item is further selected from at least one of: furniture, carpeting, or art. 3. The system of claim 2, wherein: the user-defined subject matter comprises a person, a place, or a thing; the person comprises the user and a user-selected third party; the first item comprises an item worn by the user and the user-selected third party; the first item is selected from the group consisting of clothing articles, accessories, and other items worn by the person; the accessories are selected from the group consisting of jewelry, eyewear, cosmetics, timepieces, piercing items, purses, bags, hand bags, suitcases, and other carriables; the place or the thing is selected from the group consisting of indoor structures, outdoor structures, and locations; the first item further comprises an item that can be placed on or in the indoor structures, the outdoor structures, and the locations; and the first item is further selected from the group consisting of furniture, carpeting, and pieces of art. Claim 4 is determined to be obvious in light of claim 4 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 4 17/578,404 claim 4 4. The system of claim 1, wherein: the user-defined subject matter is defined by receiving an input that comprises (i) the image of the user-defined subject matter or (ii) a specification of one or more characteristics of the user-defined subject matter. 4. 
The system of claim 3, wherein: the user-defined subject matter is defined by receiving an input that comprises an image of the user-defined subject matter; and the user-defined subject matter is defined by receiving an input that comprises a specification of one or more characteristics of the user-defined subject matter. Claim 5 is determined to be obvious in light of claim 5 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 5 17/578,404 claim 5 5. The system of claim 1, wherein: the first search result comprises one or more image-based search results that each depict a different instance of the first item; and the combination image is generated for the one or more image-based search results to produce one or more respective combination images that each depict a user wearing one of the different instances of the first item. 5. The system of claim 1, wherein: the first item is a clothing article, accessory, or other item that can be worn by a person; the first search result further comprises one or more image-based search results that each depict a different instance of the first item; and the combination image is generated for the one or more image-based search results to produce one or more respective combination images that each depict the user wearing one of the different instances of the first item. Claim 6 is determined to be obvious in light of claim 6 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 6 17/578,404 claim 6 6. The system of claim 5, wherein the operations further comprise: modifying the combination image in response to receiving an input to switch from a first user-defined subject matter engaging with the first item to a second user-defined subject matter engaging with the first item, wherein modifying the combination image further comprises calibrating, normalizing, or resizing the one or more image-based search results, and wherein the subject matter comprises the first user-defined subject matter. 6. The system of claim 5, wherein the computing instructions, when further executed on the one or more processors, further cause the one or more processors to perform an operation comprising: modifying the combination image in response to receiving an input to switch from a first user-defined subject matter engaging with the first item to a second user-defined subject matter engaging with the first item, wherein modifying the combination image comprises calibrating, normalizing, and resizing of the one or more image-based search results, and wherein the subject matter comprises the first user-defined subject matter. Claim 7 is determined to be obvious in light of claim 7 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 7 17/578,404 claim 7 7. The system of claim 1, wherein the first item comprises clothing, wherein information is used to limit the first search result based on a clothing size or a clothing style. 7. The system of claim 1, wherein the first item comprises clothing and wherein information is used to limit the first search result based on a clothing size or a clothing style. Claim 8 is determined to be obvious in light of claim 8 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application claim 8 17/578,404 claim 8 8. 
The system of claim 1, wherein: the one or more user interface elements further comprise one or more link elements for enabling a user to perform the one or more operations involving the first item displayed in the combination image. 8. The system of claim 1, wherein: the one or more user interface elements further comprise one or more link elements for enabling the user to perform the one or more operations involving the first item displayed in the combination image. Claim 9 is determined to be obvious in light of Claim 9 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application Claim 9 17/578,404 Claim 9 9. The system of claim 1, wherein the operations further comprise: filtering the first search result or a first user query based on information determined from a computer image-analysis of a user-specified image that depicts the user-defined subject matter. 9. The system of claim 1, wherein the computing instructions, when further executed on the one or more processors, further cause the one or more processors to perform an operation comprising: filtering the first search result or the first user query based on information determined from computer image-analysis of a user-specified image that depicts a user-defined subject matter. Claim 10 is determined to be obvious in light of Claim 10 of 17/578,404 (now is US patent US 12062114 B2) based on reasons below for having similar limitations. Instant application Claim 10 17/578,404 Claim 10 10. The system of claim 9, wherein: the information determined from the computer image-analysis of the user-specified image comprises information about one or more size dimensions of a person, a place, or a thing; and the first search result or the first user query is modified to exclude items that are not compatible with or relevant to the one or more size dimensions. 10. The system of claim 9, wherein: the user-defined subject matter comprises a person, a place or a thing; the information determined from the computer image-analysis of the user-specified image comprises information about one or more size dimensions of the person, the place, or the thing; and the first search result or the first user query is modified to exclude items that are not compatible with or relevant to the one or more size dimensions. Claims 11-20, they recite limitations similar in scope to the limitations of Claims 1-10 of the instant application but as a method which determined to be obvious in light of claim 11-20 of 17/578,404 (now is US patent US 12039658 B1) which recite limitations similar in scope to the limitations of Claims 1-10 of 17/578,404 (now is US patent US 12039658 B1) based on same reason described above for having similar limitations as described above for Claims 1-10. Claim Rejections - 35 USC § 103 The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. Claim(s) 1-20 is/are rejected under 35 U.S.C. 
103 as being unpatentable over Osada et al. (US 20160042564 A1, hereinafter Osada) in view of Mo (US 20090112862 A1). Regarding Claim 1, Osada teaches a system comprising: one or more processors (Osada, Paragraph [0031], "FIG. 1 is a schematic view of a virtual try-on system 1 of the embodiment";[0370], "The CPU 86 is a computing unit that controls various processes at the virtual try-on apparatus 10”); one or more non-transitory computer-readable media storing computing instructions that, when executed on the one or more processors, cause the one or more processors to perform operations comprising (Osada, Paragraph [0374], "The programs for realizing the foregoing various processes executed at the virtual try-on apparatus 10 . . . are incorporated in advance into the ROM 88 or the like"; [0372], "in the form of files installable into these devices or executable at these devices, in a computer-readable storage medium"); generating a combination image (Osada, Paragraph [0107], "the generator 12D generates a composite image <read on combination image> of the try-on subject image and the selected clothing image") that depicts subject matter engaging with a first item of a first search result (Osada, Paragraph [0089], "The first display controller 12B displays images of clothing corresponding to the characteristic information acquired by the first acquisition unit 12A in the first information on a first display 24C of the first terminal 24"; Paragraph [0107], "the generator 12D generates a composite image of the try-on subject image shot by the first image-capturing unit 20A and the selected clothing image") and at least one of two or more second items of two or more second search results (Osada, Paragraph [0117], "When the second information accepted from the first terminal 24 includes a plurality of clothing IDs, that is, when the try-on subject selects images of a plurality of pieces of clothing to be tried on in combination, the generator 12D generates a composite image by superimposing the selected plurality of clothing images on the try-on subject image"), wherein the combination image incorporates a background of an image of a user-defined subject matter (Osada, Paragraph [0041], "The first image capturing unit 20A shoots a try-on subject to capture an image of the try-on subject"; [0116], "the generator 12D generates a composite image by superimposing the selected clothing image (corrected image) on a mirror image of the try-on subject image such that the try-on subject facing the second display 18 can check the composite image as if the try-on subject looks in a mirror"; [0340], “The screen design represents the background color of a display screen, the display size of at least one of data items and clothing images to be displayed on the display screen”), and wherein the subject matter comprises the user-defined subject matter (Osada, Paragraph [0116], "the generator 120 generates a composite image by superimposing the clothing image (corrected image) selected by the try-on subject corresponding to the posture information, on the try-on subject image shot at the same timing as that of the depth map used for the calculation of the posture information" <read on user-defined subject matter>); [[generating one or more user interface elements to be displayed in conjunction with the combination image; and enabling the one or more user interface elements to perform one or more operations comprising at least one of: accessing a source of the first item, adding the first item to a shopping ca 
rt, or purchasing the first item]]; wherein the combination image comprises one or more of: a two-dimensional image (Osada, Paragraph [0044], "the images of the tryon subject are bitmap images <read on two dimensional image> in the embodiment. The image of the try-on subject is an image with prescribed pixels values indicative of colors, brightness, and others"). But Osada does not explicitly disclose generating one or more user interface elements to be displayed in conjunct ion with the combination image; and enabling the one or more user interface elements to perform one or more operations comprising at least one of: accessing a source of the first item, adding the first item to a shopping cart, or purchasing the first item. However, Mo teaches generating one or more user interface elements to be displayed in conjunction with the combination image (Mo, Paragraph [0086], "the user terminal 100 preferably provides link information <read on user interface elements> together with image information <read on combination image>. Using the link information, a user can conveniently move to a web page, linked to a provided image, by clicking on the specific area or a random area of the image"); enabling the one or more user interface elements to perform one or more operations comprising at least one of: accessing a source of the first item (Mo, Paragraph [0086], "Using the link information, a user can conveniently move to a web page <read on accessing a source of the first item>, linked to a provided image, by clicking on the specific area or a random area of the image"); adding the first item to a shopping cart, or purchasing the first item (Mo, Paragraph [0126], "the information search unit 330 may recommend one or more products to a user for one or more of clothes images created through image simulation <read on combination image>. The user can reduce time and effort by purchasing one or more of the recommended products <read on purchasing the first item>") Mo and Osada are analogous since both are directed to image-based systems for displaying clothing items to users in a visual, interactive manner that connects product imagery to e-commerce actions. Osada provided a way of generating a composite (combination) image in which a captured image of a try-on subject is superimposed with one or more selected clothing images, including support for multiple garment layers t ried on in combination, and displaying those composite images to the user. Mo provided a way of embedding interactive link information and purchase-enabling user interface elements into images returned from product searches, allowing a user to directly navigate to the product source page or purchase a recommended product resulting from clothing image simulation. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the interactive user interface elements enabling access to a product source and purchase operations taught by Mo into the modified virtual try-on system of Osada, such that the composite image displayed to the try-on subject would be accompanied by actionable interface controls enabling the user to access the source of a clothing item, add it to a shopping cart, or purchase it directly. 
The motivation is to enable a user to simply search for desired products based on accurate image search results and also compare the prices of the products conveniently, as discussed by Mo in Paragraph; and to allow the user to reduce time and effort by purchasing one or more of the recommended products directly, as discussed by Mo in Paragraph . Regarding Claim 2, the combination of Osada and Mo teaches the invention of Claim 1. The combination further teaches wherein the first search result comprises a plurality of first search results that each depict a respective different instance of the first item (Osada, Paragraph [0313], "the first display controller 138 displays on the first display 24C the clothing images corresponding to the characteristic information acquired by the first acquisition unit 12A in the first information"); and the combination image comprises a plurality of combination images, each of which is generated for a different one of the plurality of first search results to produce the user-defined subject matter engaging with different ones of the respective different instances of the first item (Osada, Paragraph [0234], "the generator 120 generates a composite image of the try-on subject image shot by the first image-capturing unit 20A and the clothing images corresponding to the clothing IDs in the second information"; [0236], "While the composite image is displayed at SEQ152, the generator 12D repeatedly executes the process for generating a composite image by combining the subject image continuously shot by the image-capturing unit 20 with the clothing images corresponding to the clothing IDs in the second information (refer to FIG. 5) received at SEQ146 and corresponding to the posture information calculated from the depth map obtained by the shooting. Then, each time a new composite image is generated by the generator 120, the second display controller 12E switches the composite images to be displayed on the second display 18"). Regarding Claim 3, the combination of Osada and Mo teaches the invention of Claim 1. The combination further teaches the user-defined subject matter comprises a person, a place, or a thing (Osada, Paragraph [0042), "the try-on subject may be a living thing or a non-living thing as far as it tries on clothing. 
The living thing may be a person <read on person>, for example"); the person comprises a user or a user-selected third party (Osada, Paragraph [0042), "the try-on subject may be a living thing or a non-living thing <read on user> wearing clothing"; Paragraph [0095],"the other try-on subjects selected in advance by the try-on subject are famous persons or celebrities <read on user-selected third party> preferred by the try-on subject, for example"); the first item comprises an item worn by the user or the user-selected third party (Osada, Paragraph [0043), "the clothing may be outer wears, skirts, pants, shoes, hats <read on item worn by the user or the user-selected third party>, and others"); the first item is selected from at least one of: clothing articles, accessories, or other items worn by the person (Osada, Paragraph [0066), "the kinds of clothing may include tops, outers, bottoms, and inners <read on clothing articles>, but are not limited to them"); [[ wherein the accessories are selected from at least one of: jewelry, eyewear, cosmetics, timepieces, piercing items, purses, bags, handbags, suitcases, or other carriables; the place or the thing is selected from at least one of: indoor structures, outdoor structures, or locations; the first item further comprises an item that is placed on or in the indoor structures, the outdoor structures, or the locations; and the first item is further selected from at least one of: furniture, carpeting, or art ]]. Osada does not explicitly disclose but Mo teaches wherein the accessories are selected from at least one of: jewelry, eyewear, cosmetics, timepieces, piercing items, purses, bags, handbags, suitcases, or other carriables {Mo, Paragraph [0096), "when product images, registered with on-line shopping malls, such as Ebay <read on other carriables>, are gathered, attribute information, such as the names or prices of relevant products, is received in keyword form"); the place or the thing is selected from at least one of: indoor structures, outdoor structures, or locations; the first item further comprises an item that is placed on or in the indoor structures, the outdoor structures, or the locations; and the first item is further selected from at least one of: furniture, carpeting, or art (Mo, Paragraph (0029), "the search server for receiving any one or more of the search term entry signal, the image selection signal and the image combination signal from the user terminal, performing searching using attribute information of an image <read on item placed on or in indoor structures, outdoor structures, or locations>, and transmitting search results, including images, to the user terminal"; Paragraph [0119), "clothes images are classified and stored according to the silhouette, pattern, color, brand, size <read on furniture, carpeting, or art as broadly categorized items stored and searchable by the system>"). Mo and Osada are analogous since both of them are dealing with image-based product search and display systems where users browse, select, and visualize items in connection with their own characteristics or preferences for the purpose of making an informed purchase. 
Osada provided a way of virtually trying on clothing items by generating a composite image of the try-on subject wearing selected clothing, filtered by body characteristics Mo provided a way of performing image-based search across broad product categories - including items sold in online shopping malls beyond clothing - using encoded attribute information to return accurate results. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate the broad product category coverage taught by Mo into the modified invention of Osada such that the virtual try-on and combination image system extends beyond clothing to include accessories, carriables, and items placed in indoor or outdoor structures such as furniture, carpeting, or art which expand the commercial utility of the system across all shoppable product categories and to conveniently compare the prices of the products with one another. Regarding Claim 4, the combination of Osada and Mo teaches the invention of Claim 1. The combination further teaches wherein the user-defined subject matter is defined by receiving an input that comprises (i) the image of the user-defined subject matter (Osada, Paragraph (0041), "the first image-capturing unit 20A shoots a try-on subject to capture an image of the try-on subject <read on the image of the user-defined subject matter>"); or (ii) a specification of one or more characteristics of the user-defined subject matter (Osada, Paragraph (0088), "the try-on subject operates the first terminal 24 to input the characteristic information <read on a specification of one or more characteristics of the user-defined subject matter>, the first terminal 24 transmits the characteristic information to the virtual try-on apparatus 10"; Paragraph (0069), "the characteristic information specifically includes at least one of outer characteristics and inner characteristics of the try-on subject <read on one or more characteristics of the user-defined subject matter>"). Regarding Claim 5, the combination of Osada and Mo teaches the invention of Claim 1. The combination further teaches [[ the first search result comprises one or more image-based search results that each depict a different instance of the first item]]. wherein the combination image is generated for the one or more image-based search results to produce one or more respective combination images that each depict a user wearing one of the different instances of the first item (Osada, Paragraph [0327], "the generator 12D generates a composite image <read on combination image> of the try-on subject image”; [0361], “the generator 15D generates a composite image in which the try-on subject image shot by the first image-capturing unit 20A and the clothing images corresponding to the clothing IDs included in the second information"; [0236], "the second display controller 12E switches the composite images <read on one or more respective combination images> to be displayed on the second display 18"). 
Osada does not explicitly disclose but Mo teaches the first search result comprises one or more image-based search results that each depict a different instance of the first item (Mo, Paragraph [0029], "the search server for receiving any one or more of the search term entry signal, the image selection signal and the image combination signal from the user terminal, performing searching using attribute information of an image, and transmitting search results, including images<read on image-based search results that each depict a different instance of the first item>, to the user terminal"). Mo and Osada are analogous since both of them are dealing with image-centric product retrieval and visualization systems that return item images to a user for preview and selection purposes. Osada provided a way of generating composite images by superimposing selected clothing images on a try-on subject image, enabling the user to visually preview each selected clothing item as worn. Mo provided a way of transmitting image-based search results - each containing images of matching products - to the user terminal in response to search input, enabling the user to view multiple distinct instances of an item. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate Mo's image-based multi-result search output into the composite image generation system of Osada such that a respective combination image is generated for each image-based search result, showing the user wearing each different instance of the item which will allow users to visually compare multiple instances of an item as worn and to conveniently compare the prices of the products with one another. Regarding Claim 6, the combination of Osada and Mo teaches the invention of Claim 5. The combination further teaches modifying the combination image in response to receiving an input to switch from a first user-defined subject matter engaging with the first item to a second user-defined subject matter engaging with the first item (Osada, Paragraph [0237), "the acceptor 12C determines whether an instruction for changing the composite images has been accepted ... the acceptor 12C registers in advance the try-on subject's motion of raising the right hand as an instruction for changing the composite images <read on receiving an input to switch from a first user-defined subject matter to a second user-defined subject matter engaging with the first item>"; Paragraph [0241],"the generator 120 searches the storage 14 for other second information including the try-on subject ID in the second information corresponding to the composite image previously displayed on the second display 18, and reads one piece of the second information not displayed in any composite image. Then, the generator 120 uses the read second information to generate a composite image <read on modifying the combination image in response to switching from a first user-defined subject matter to a second user-defined subject matter>"); wherein the subject matter comprises the first user-defined subject matter (Osada, Paragraph [0234], "the generator 12D generates a composite image of the try-on subject image shot by the first image-capturing unit 20A and the clothing images corresponding to the clothing IDs in the second information <read on the subject matter comprising the first user-defined subject matter>"). 
[[ wherein modifying the combination image further comprises calibrating, normalizing, or resizing the one or more image-based search results]]. But Osada does not explicitly disclose [[wherein modifying the combination image further comprises calibrating, normalizing, or resizing the one or more image-based search results]]. However, Mo teaches calibrating, normalizing, or resizing the one or more image-based search results (Mo, Paragraph [0119], "clothes images are classified and stored according to the silhouette, pattern, color, brand, size, etc. <read on calibrating, normalizing, or resizing image-based search results according to stored size and attribute classifications>"; Paragraph [0096], "attribute information, such as the names or prices of relevant products, is received in keyword form <read on normalizing image attribute data as part of the image-based search and retrieval process>"). Mo and Osada are analogous since both of them are dealing with image-based product search and virtual try-on systems where images of items must be consistently processed and standardized to be accurately displayed on a user's terminal in response to user inputs. Osada provided a way of accepting an input gesture from a try-on subject to switch the displayed composite image from one combination to another, with the system generating a new composite image using the updated selection. Mo provided a way of classifying and normalizing images and their attribute information according to standardized dimensions such as size, silhouette, color, and pattern, so that images retrieved by the system can be consistently matched and displayed. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate Mo's image normalization and size-based classification into Osaka’s subject-switching composite image generation such that when the combination image is modified by switching from a first to a second user-defined subject matter, the image-based search results are calibrated, normalized, or resized to properly fit the new subject matter which can ensure that item images accurately conform to the dimensional parameters of each different try-on subject to produce a natural result. Regarding Claim 7, the combination of Osada and Mo teaches the invention of Claim 1. The combination further teaches wherein the first item comprises clothing (Osada, Paragraph [0043], "the clothing may be outer wears, skirts, pants, shoes, hats <read on the first item comprising clothing>, and others"), wherein information is used to limit the first search result based on a clothing size or a clothing style (Osada, Paragraph [0069], "the characteristic information specifically includes at least one of outer characteristics and inner characteristics of the try-on subject"; [0070], "the outer characteristics may include body shape parameters indicative of the body shape of the try-on subject <read on information used to limit the search result based on clothing size>"; [0212], "the first display controller 128 reads the clothing images corresponding to the accepted characteristic information from the first information <read on using information to limit the first search result based on clothing size or style>"; [0066], "the kinds of clothing may include tops, outers, bottoms, and inners <read on clothing style>, but are not limited to them"). Regarding Claim 8, the combination of Osada and Mo teaches the invention of Claim 1. 
The combination further teaches wherein the one or more user interface elements further comprise one or more link elements for enabling a user to perform the one or more operations involving the first item displayed in the combination image (Osada, Paragraph [0131],"The first receiver 121 may receive from the first server device 28 the URL (uniform resource locator) of a web page on which the clothing image corresponding to the clothing ID included in the try-on information and the attribute information corresponding to the clothing image are arranged <read on a link element involving the first item displayed in the combination image>"; [0258], "the output unit 12J of the virtual try-on apparatus 10 converts the URL received from the first server device 28 into an image describing a one-dimensional code or a two-dimensional code, and outputs the same to the second display 18 <read on a user interface element comprising a link element enabling a user to perform operations involving the displayed clothing item>; [0130], "the try-on subject can receive various services such as discounts provided at the virtual store by inputting the code information through the input screen on a web page of the virtual store on the Internet <read on performing operations involving the first item displayed in the combination image via a link element>"). Regarding Claim 9, the combination of Osada and Mo teaches the invention of Claim 1. The combination further teaches wherein the operations further comprise filtering the first search result or a first user query based on information determined from a computer image-analysis of a user-specified image that depicts the user-defined subject matter (Osada, Paragraph [0099], "the second acquisition unit 12F acquires body shape parameters indicative of the body shape of the try-on subject"; [0104], "the second acquisition unit 12F calculates the body shape parameters of the try-on subject from the depth map of the try-on subject acquired from the second image capturing unit 208 <read on computer image-analysis of a user-specified image>"; [0212], "the first display controller 12B reads the clothing images corresponding to the accepted characteristic information from the first information <read on filtering the first search result based on information determined from image-analysis of the subject matter>"). Regarding Claim 10, the combination of Osada and Mo teaches the invention of Claim 9. The combination further teaches wherein the information determined from the computer image-analysis of the user-specified image comprises information about one or more size dimensions of a person, a place, or a thing (Osada, Paragraph [0100],, "the second acquisition unit 12F acquires body shape parameters indicative of the body shape of the try-on subject <read on information about one or more size dimensions of a person>"; [0284], "the first terminal 24 may acquire the body shape parameters from the virtual try-on apparatus 10 or the input unit 24A of the first terminal 24 <read on determining size dimension information through computer image-analysis of the user-specified image>"). [[the first search result or the first user query is modified to exclude items that are not compatible with or relevant to the one or more size dimensions]]. 
Osada does not explicitly disclose but Mo teaches the first search result or the first user query is modified to exclude items that are not compatible with or relevant to the one or more size dimensions (Mo, Paragraph [0119], "clothes images are classified and stored according to the silhouette, pattern, color, brand, size <read on storing and organizing items by size dimension> etc. The silhouettes are classified and stored according to the gender, coat or jacket, season, etc."; [0083], "a more detailed image can be created by including a brand logo or inputting size information <read on using size information to modify the user query and exclude incompatible items>"; (0096], "attribute information, such as the names or prices of relevant products, is received in keyword form <read on using size-related attribute information to filter and exclude non-compatible search results>"). Mo and Osada are analogous since both of them are dealing with image-based product search and virtual try-on systems where size-related attribute information is used to filter and refine the item results presented to the user. Osada provided a way of acquiring body shape parameters from depth-map image-analysis of the try-on subject and using those parameters to display only clothing images that correspond to the subject's characteristic information, implicitly excluding incompatible items. Mo provided a way of classifying and storing item images explicitly by size dimension, and allowing users to input size information during image simulation to further narrow search results - thereby excluding items incompatible with the specified dimensions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention was made to incorporate Mo's explicit size-based item classification and size-input filtering into Osada's body-shape-parameter-driven image selection such that the first search result or first user query is modified to exclude items not compatible with or relevant to the one or more size dimensions determined from the computer image-analysis. The motivation is to prevent the user from being shown items that cannot fit or are not suited to their dimensions, thereby saving time and improving the usefulness of the system, as discussed by Mo in Paragraph, "a more detailed image can be created by including a brand logo or inputting size information <read on motivation to use size dimensions to exclude incompatible items from search results>." Regarding Claim 11, it recites limitations similar in scope to the limitations of Claim 1 but as a method and the combination of Osada and Mo teaches all the limitations as of Claim 1. Therefore is rejected under the same rationale. Regarding Claim 12, it recites limitations similar in scope to the limitations of Claim 2 and therefore is rejected under the same rationale. Regarding Claim 13, it recites limitations similar in scope to the limitations of Claim 3 and therefore is rejected under the same rationale. Regarding Claim 14, it recites limitations similar in scope to the limitations of Claim 4 and therefore is rejected under the same rationale. Regarding Claim 15, it recites limitations similar in scope to the limitations of Claim 5 and therefore is rejected under the same rationale. Regarding Claim 16, it recites limitations similar in scope to the limitations of Claim 6 and therefore is rejected under the same rationale. 
Regarding Claim 17, it recites limitations similar in scope to the limitations of Claim 7 and therefore is rejected under the same rationale. Regarding Claim 18, it recites limitations similar in scope to the limitations of Claim 8 and therefore is rejected under the same rationale. Regarding Claim 19, it recites limitations similar in scope to the limitations of Claim 9 and therefore is rejected under the same rationale. Regarding Claim 20, it recites limitations similar in scope to the limitations of Claim 10 and therefore is rejected under the same rationale. Conclusion The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. US 20130073352 A1 DYNAMIC GROUP DISCOUNTING US 20140149280 A1 REAL-TIME MULTI MASTER TRANSACTION US 20140288963 A1 METHOD, APPARATUS AND COMPUTER PROGRAM PRODUCT FOR UPDATING ELECTRONIC MEDICAL RECORDS US 20150254416 A1 METHOD AND SYSTEM FOR PROVIDING MEDICAL ADVICE US 20150293382 A1 Method and System for Virtual Try-On and Measurement US 20150363890 A1 ACCOUNTING SYSTEM, COMPUTER READABLE MEDIA, AND METHODS US 20160189431 A1 VIRTUAL TRY-ON SYSTEM, VIRTUAL TRY-ON TERMINAL, VIRTUAL TRY-ON METHOD, AND COMPUTER PROGRAM PRODUCT Any inquiry concerning this communication or earlier communications from the examiner should be directed to YUJANG TSWEI whose telephone number is (571)272-6669. The examiner can normally be reached 8:30am-5:30pm EST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kent Chang can be reached on (571) 272-7667. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /YuJang Tswei/Primary Examiner, Art Unit 2614

Prosecution Timeline

Aug 12, 2024
Application Filed
Feb 22, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579805
AUGMENTED, VIRTUAL AND MIXED-REALITY CONTENT SELECTION & DISPLAY FOR TRAVEL
2y 5m to grant • Granted Mar 17, 2026
Patent 12579838
Perspective Distortion Correction on Faces
2y 5m to grant • Granted Mar 17, 2026
Patent 12567213
COMPUTER VISION AND ARTIFICIAL INTELLIGENCE METHOD TO OPTIMIZE OVERLAY PLACEMENT IN EXTENDED REALITY
2y 5m to grant • Granted Mar 03, 2026
Patent 12567189
RELATIONAL LOSS FOR ENHANCING TEXT-BASED STYLE TRANSFER
2y 5m to grant • Granted Mar 03, 2026
Patent 12561930
PARAMETRIC EYEBROW REPRESENTATION AND ENROLLMENT FROM IMAGE INPUT
2y 5m to grant • Granted Feb 24, 2026
Based on this examiner's 5 most recent grants.

Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
99%
With Interview (+17.0%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 447 resolved cases by this examiner. Grant probability derived from career allow rate.
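
A minimal sketch of how this panel's figures could be assembled, assuming the grant probability is taken directly from the career allow rate, the with-interview figure is the examiner's with-interview allow rate capped at 99% for display, and the timeline is the median of historical months-to-grant; the function and inputs are illustrative, not the tool's actual method.

from statistics import median

def projections(career_allow_rate: float,
                with_interview_allow_rate: float,
                months_to_grant: list[float]) -> dict:
    return {
        "grant_probability": round(career_allow_rate, 2),                  # e.g. 0.84
        "with_interview": round(min(with_interview_allow_rate, 0.99), 2),  # e.g. 0.99
        "median_months_to_grant": median(months_to_grant),                 # e.g. ~29 (2y 5m)
    }

# Example with inputs consistent with this dashboard (illustrative values):
print(projections(0.84, 0.99, [27.0, 28.5, 29.0, 30.0, 31.5]))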
