Prosecution Insights
Last updated: April 19, 2026
Application No. 18/767,039

VIRTUAL WARDROBE AR EXPERIENCE

Non-Final OA: §103, §DP
Filed: Jul 09, 2024
Examiner: PRINGLE-PARKER, JASON A
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: Snap Inc.
OA Round: 1 (Non-Final)

Grant Probability: 84% (Favorable)
OA Rounds: 1-2
To Grant: 2y 5m
With Interview: 96%

Examiner Intelligence

Career Allow Rate: 84% (456 granted / 546 resolved; +21.5% vs TC avg), above average
Interview Lift: +12.7% (moderate), based on resolved cases with vs. without interview
Typical Timeline: 2y 5m avg prosecution; 25 currently pending
Career History: 571 total applications across all art units
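The headline figures above are simple ratios of the career counts; as a quick consistency sketch (variable names are mine, figures taken from the panel above):

```python
# Career counts from the Examiner Intelligence panel (source figures)
granted = 456
resolved = 546
pending = 25
total_applications = 571

# Career allowance rate over resolved cases; 456/546 is ~83.5%,
# which the panel displays rounded to 84%
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")

# Total filings reconcile as resolved cases plus those still pending
print(resolved + pending == total_applications)  # True
```

The second check confirms the panel's counts are internally consistent: 546 resolved plus 25 pending equals the 571 total applications reported.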

Statute-Specific Performance

§101: 9.5% (-30.5% vs TC avg)
§103: 44.3% (+4.3% vs TC avg)
§102: 24.5% (-15.5% vs TC avg)
§112: 12.0% (-28.0% vs TC avg)

Tech Center average values are estimates. Based on career data from 546 resolved cases.
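Each delta is the examiner's statute-specific rate minus the Tech Center average estimate, so the implied TC baseline can be recovered by subtraction. A small illustrative sketch (variable names are mine, numbers from the table above):

```python
# (examiner rate %, delta vs TC avg %) per statute, from the table above
stats = {
    "101": (9.5, -30.5),
    "103": (44.3, +4.3),
    "102": (24.5, -15.5),
    "112": (12.0, -28.0),
}

# Implied TC average = examiner rate - delta
for statute, (rate, delta) in stats.items():
    tc_avg = rate - delta
    print(f"§{statute}: examiner {rate:.1f}%, implied TC avg {tc_avg:.1f}%")
```

Running this shows every row implies the same 40.0% Tech Center baseline, i.e. the deltas in the table were all computed against a single TC average estimate of 40.0%.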

Office Action

§103 §DP
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

DETAILED ACTION

Double Patenting

Claims 1 and 19-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claim 14 of U.S. Patent No. 12062146. Although the claims at issue are not identical, they are not patentably distinct from each other because 18/767039 is strictly broader than 12062146. It is well settled that "anticipation is the epitome of obviousness," In re McDaniel, 293 F.3d 1379, 1385 (Fed. Cir. 2002) (quoting Connell v. Sears, Roebuck & Co., 722 F.2d 1542, 1548 (Fed. Cir. 1983)); In re Fracalossi, 681 F.2d 792, 794 (CCPA 1982).

Claim chart, 18/767039 vs. 12062146 (17/815831):

18/767039, claim 1: A method comprising: accessing, by an application, an image depicting a real-world fashion item of a user;
12062146, claim 1: A method comprising: accessing, by an application, an image depicting a real-world fashion item of a user;

18/767039: generating, by the application, a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image;
12062146: generating, by the application, a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image;

18/767039: storing the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user;
12062146: storing the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user;

18/767039: generating, by the application, an augmented reality (AR) experience that allows the user to interact with the virtual wardrobe, the AR experience comprising: receiving inputs from the user and a group of friends selecting portions of their respective virtual wardrobes; and generating, for display in a conversation interface, a group image comprising avatars representing the user and the group of friends together each respectively wearing the selected portions of the virtual wardrobes.
12062146: and processing attributes of the plurality of 3D virtual fashion items to identify private 3D virtual fashion items and public 3D virtual fashion items. 14. (Original) The method of claim 13, further comprising: processing attributes of the plurality of 3D virtual fashion items to identify private 3D virtual fashion items and public 3D virtual fashion items, wherein the subset of the plurality of 3D virtual fashion items is selected from the public 3D virtual fashion items.

Claim 2 is rejected on the ground of nonstatutory double patenting as being unpatentable over claim 14 of U.S. Patent No. 12062146. Although the claims at issue are not identical, they are not patentably distinct from each other because 12062146 claim 14 and 18/767039 claim 2 contain the same limitations but for minor language regarding a subset defining the private/public items.

Claim chart, 18/767039 vs. 12062146 (17/815831):

18/767039, claim 1: A method comprising: accessing, by an application, an image depicting a real-world fashion item of a user;
12062146, claim 1: A method comprising: accessing, by an application, an image depicting a real-world fashion item of a user;

18/767039: generating, by the application, a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image;
12062146: generating, by the application, a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image;

18/767039: storing the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user;
12062146: storing the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user;

18/767039, claim 2: an augmented reality (AR) experience that allows the user to interact with the virtual wardrobe, the AR experience comprising: receiving inputs from the user and a group of friends selecting portions of their respective virtual wardrobes; and generating, for display in a conversation interface, a group image comprising avatars representing the user and the group of friends together each respectively wearing the selected portions of the virtual wardrobes.
12062146: generating, by the application, an augmented reality (AR) experience that allows the user to interact with the virtual wardrobe, the AR experience comprising: receiving inputs from the user and a group of friends selecting portions of their respective virtual wardrobes; and generating, for display in a conversation interface, a group image comprising avatars representing the user and the group of friends together each respectively wearing the selected portions of the virtual wardrobes. and processing attributes of the plurality of 3D virtual fashion items to identify private 3D virtual fashion items and public 3D virtual fashion items. 14. (Original) The method of claim 13, further comprising: processing attributes of the plurality of 3D virtual fashion items to identify private 3D virtual fashion items and public 3D virtual fashion items, wherein the subset of the plurality of 3D virtual fashion items is selected from the public 3D virtual fashion items.

Claims 3-18 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-18 of U.S. Patent No. 12062146. Although the claims at issue are not identical, they are not patentably distinct from each other because they are obvious variations, being combinations of the dependent claims. Claim 3 of the current application is a combination of 12062146 claims 14+3; claim 4 of the current application is a combination of 12062146 claims 14+4; the remaining claims 5-13 and 18 match similarly. Claim 15 matches claims 14+16.
Claims 16-17 match claims 14+17.

Allowable Subject Matter

Claim 2 recites "receiving inputs from the user and a group of friends selecting portions of their respective virtual wardrobes and generating, for display in a conversation interface, a group image comprising avatars representing the user and the group of friends together each respectively wearing the selected portions of the virtual wardrobes" and overcomes the prior art (see 17/815,831), but is rejected under Double Patenting above.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 3, 6-7, 9, 11-14, 16-20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Beckham U.S. Patent/PG Publication 2017001145 in view of Yu U.S. Patent/PG Publication 20150019992.

Regarding claim 1 (independent):

A method comprising: (Beckham Fig. 2A).

accessing, by an application, (Beckham [0056] Turning to FIG. 2A, the computing device 20 is configured to receive, process, and store various information used to implement one or more software applications, such as client software application 30c.
[…] It will be understood that the hardware components of computing device 20 can include any appropriate device, examples of which include a portable computing device, such as a laptop, tablet or smart phone, or other computing devices, such as a desktop computing device or a server-computing device.)(Beckham Fig. 2C)

an image depicting a real-world fashion item of a user (Beckham [0105] The acquired image scheme 263g compiles images of fashion items from various sources. Acquired images may be derived or pulled from 3.sup.rd party websites. Alternatively, acquired images may [be] an image of a physical item of clothing taken via the user's camera on the user's computing device 20.).

generating, by the application, a three-dimensional (3D) virtual fashion item based on the real-world fashion item depicted in the image (Beckham [0106] The archetype association scheme 263d creates fashion items based on a standard or predefined categories of garments archetypes. For instance, if the software application 30 determines that a pair of dark blue skinny jeans should be present as a possible virtual outfit, the software application associates the fashion items with a standard image or file type associated with blue jeans that have tapered or slim fit, and cause that particular garment item to be displayed via the user interface 28 of the computing device 20.)(Beckham [0096] Continuing with FIG. 3D, each virtual outfit 200 includes one or more fashion item models 220 that are generated utilizing an item model acquisition scheme 260. The item model acquisition scheme 260 can generate the fashion item model 220 and its component data (see FIG. 3B) for display via the user interface 28 of the user's computing device 20. Several item model acquisition schemes 260 are available including: a wire frame model scheme 262a; scanned fashion item data 262b; human wire frame model scheme 262c; fashion item archetype association 262d; footwear model scheme 263e; direct user input 262f; an acquired image scheme 262g; and a silhouette model scheme 263h. Typically, one item acquisition scheme is used to compile fashion item model 220. However, several different schemes may be used. In one embodiment, the software application 30 can utilize a wire frame model scheme 262a to compile fashion item models, as will be further detailed below. In one embodiment, the wire frame model scheme 262a can be used in combination with the footwear model scheme 263g. In other embodiments, the software application 30 can utilize scanned fashion item data schemes 262b to compile fashion item models. In still other embodiments, certain schemes may be used in combination with others. For instance, in one embodiment, the garment wire frame model scheme 262a can be used in combination with direct user input 262f.)

storing the 3D virtual fashion item in a database that includes a virtual wardrobe comprising a plurality of 3D virtual fashion items associated with the user (Beckham [0063] Referring to FIG. 3A, as noted above, the system 1 includes one or more database 100 of wardrobe data for each user.).

and processing attributes of the plurality of 3D virtual fashion items to identify private 3D virtual fashion items and public 3D virtual fashion items (Beckham [0191] The application allows a user to transmit fashion items to other users in the group via the communications link. The user can share specific fashion items they have in their closet or that they purchased from a retailer.) since only specific items are shared, some are private and some are public.

Beckham discloses public and private items as described above. However, for the purposes of compact prosecution and for further clarity, in a related field of endeavor, Yu teaches: and processing attributes of the plurality of 3D virtual fashion items to identify private 3D virtual fashion items and public 3D virtual fashion items (Yu [0041] The sixth column of the table 300 defines the status of the wardrobe item. The status column indicates whether the wardrobe item is available for the user 121 to wear. The status column may also indicate whether the user 121 let a friend or family member borrow the wardrobe item or the user 121 has borrowed the item from a friend or family member 127(a . . . n). The status also includes the location of the wardrobe item. For example, the location could be in a closet, in a drawer, on a shelf, at the dry-cleaners, in the laundry, in storage to name just a few.)(Yu The other user (i.e. friends and family 127(a . . . n)) can drag and drop wardrobe items 503(a . . . n), 505(a . . . n), 507(a . . . n), 509(a . . . n), 511(a . . . n) and 513(a . . . n) from user 121's wardrobe 500 to the mirror 701 or drag and drop items from his or her virtual wardrobe 611 to the mirror for user 121's viewing and borrowing of an item.)(Yu [0063] In response to request (807) the friend 125(a . . . n) can also allow the user 121 to borrow a wardrobe and forward (811) that wardrobe item to be stored in the storage 107 for the user 121. The borrowed wardrobe item is then transmitted (813) to the ensembling component 105.) since users can grant access to others allowing them to borrow items; therefore items have a private attribute (no borrowing access granted) or a public attribute (borrowing access granted).

Therefore, it would have been obvious before the effective filing date of the claimed invention to grant sharing access as taught by Yu.
The rationale for doing so would have been that it combines prior art elements according to known methods to yield predictable results: Beckham has fashion items where specific items can be shared but does not provide extensive detail on sharing, and Yu has fashion items where specific items can be shared. The results are predictable since both provide clothing inventory management and friend-sharing features, and Yu merely adds additional sharing functionality. Therefore it would have been obvious to combine Yu with Beckham to obtain the invention.

Regarding claim 6: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches further comprising generating a fashion profile for the user based on the virtual wardrobe (Beckham [0222] In FIG. 14, selection of dress me element 1104 can cause the user interface to display screen 3010 as shown in FIG. 25A. In FIG. 25A, the screen 3010 includes a plurality of event selection elements 3022. As illustrated, the event selection elements 3022 includes a work event element 3024, a casual event element 3026, a night out event 3028, and other event element 3030. Like event selection elements 1022 described above, the event selection elements 3022 each include a visual representation of the particular event. Furthermore, the visual representation of the event may also be representative of the user's selected lifestyle demographic selected in screen 1010 (FIG. 12A).)(Beckham [0148] The system and methods as described herein utilizes identifiable images to gather contextual information about the user's persona, activities, preferences, mood, style preferences, sizes, fit, etc. For example, determining a user's persona is accomplished through self-identification with lifestyle images (high school, working professional, mid-career, working mom, etc).).

Regarding claim 7: The method of claim 6 has all of its limitations taught by Beckham in view of Yu.
Beckham further teaches further comprising: recommending, based on the fashion profile, a new fashion item to the user based on attributes of one or more 3D virtual fashion items included in the virtual wardrobe (Beckham [0052] In yet another example, the application can manage a physical wardrobe by facilitating: a) exchange of wardrobe items among multiple users; b) cleaning services for wardrobe items, c) disposition of wardrobe items for sale or donation, and/or d) accessing purchase information for items complementary to a given outfit.).

Regarding claim 9: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches further comprising: determining a current mood (Beckham [0148] The system and methods as described herein utilizes identifiable images to gather contextual information about the user's persona, activities, preferences, mood, style preferences, sizes, fit, etc.) or weather associated with a location of the user (Beckham [0151] The outfit may be optionally compiled based on selected weather context.);

identifying a set of 3D virtual fashion items that include attributes that match the current mood or the weather associated with the location of the user and presenting the set of 3D virtual fashion items as suggestions to the user to wear on a given day (Beckham [0152] A predefined virtual outfit has a defined lifestyle demographic type and a defined style genre. Each predefined virtual outfit includes a character set association with the plurality of events and the plurality of weather contexts. The character set association is a positive association when the predefined virtual outfit is selected for a particular event of the plurality of events and a particular weather context of one or more weather contexts. The character set association is a negative association when the predefined virtual outfit is not associated with the particular one of the plurality of events and the particular one of the weather contexts. In accordance with an embodiment of the present disclosure, the application compiles a number of the virtual outfits by utilizing a vector sum of the electronic fashion items with the lifestyle demographic type and the style genre that includes a positive association for the selected event and the selected weather context.)(Beckham [0179] In operation 520, the application can compile and display the virtual outfits for the planned travel occasion. The application may compile the virtual outfits based on fashion item model 220, item usage data 150, user wardrobe data, specific planned events, and weather information.)

Regarding claim 11: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches further comprising: receiving input from the user that selects a subset of the 3D virtual fashion items and generating a listing for selling the selected subset of the 3D virtual fashion items (Beckham [0183] Referring to FIG. 9, a method 600 for managing the disposition of fashion items is illustrated. The method 600 is initiated in operation 602 when the user accesses the disposition planning portal displayed on the computing device. The user can access the disposition portion selecting a “Donate” or “Sell” element or by other means, such as through a main screen. Process control is transferred to operation 610. In operation 610, the application can identify infrequently used fashion items. An infrequently used fashion item may be a fashion item that is selected “n” times over predetermined period of time. In one example, “n” could be 0, 1, 5, or 10 depending on the circumstances and usage of the application. Process control is transferred to operation 620. [0184] In operation 620, the user interface can display the infrequently used fashion items. The user interface can display the infrequently used fashion item in a listing form or graphically as discussed above.).

Regarding claim 12: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches comprising: receiving a real-time image depicting a person and overlaying one or more of the plurality of 3D virtual fashion items on the person depicted in the real-time image (Beckham [0157] In operation 388, the application adjusts the fashion item model to the human form model. Process control is transferred to operation 370. In operation 370, the application overlies the fashion item model on the human form model, as illustrated in FIGS. 7D and 7F.).

Beckham discloses a real-time overlay as described above. However, for the purposes of compact prosecution and for further clarity, in a related field of endeavor, Yu teaches: receiving a real-time image depicting a person and overlaying one or more of the plurality of 3D virtual fashion items on the person depicted in the real-time image (Yu [0049] Referring now to FIG. 7, a user interface 700 of an embodiment of the present invention is illustrated. The user 121 having logged into digital closet system 100 is presented on his or her display a rendering of their wardrobe in interface 500. The user may then drag any number of wardrobe items to a virtual mirror displaying an image of user 121.).

Therefore, it would have been obvious before the effective filing date of the claimed invention to have real-time overlay as taught by Yu.
The rationale for doing so would have been that it combines prior art elements according to known methods to yield predictable results: Beckham has a user that is overlaid with virtual clothing, and Yu has a user that is overlaid with virtual clothing where the overlay is more explicitly real-time. The results are predictable since both take a person and overlay clothing using similar hardware. Therefore it would have been obvious to combine Yu with Beckham to obtain the invention.

Regarding claim 13: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches further comprising: receiving input from the user to select a subset of the plurality of 3D virtual fashion items to share with one or more friends of the user (Beckham [0191] The user can share specific fashion items they have in their closet or that they purchased from a retailer. Furthermore, the application can interface with 3.sup.rd party websites, e.g. invitation websites. In addition, the application can coordinate attire, colors, etc., for all users in the group.).

Regarding claim 14: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Yu further teaches further comprising restricting sharing of the private 3D virtual fashion items to a preselected set of friends of the user (Yu [0063] The user 121 in using embodiments of the present invention can have the ensembling component 105 request (807) a friend 125(a . . . n) give input of a ensemble 205 the user has put together. The friend 125(a . . . n) has advice about the ensemble 205 transmitted (809) back to the ensembling component 105. In response to request (807) the friend 125(a . . . n) can also allow the user 121 to borrow a wardrobe and forward (811) that wardrobe item to be stored in the storage 107 for the user 121. The borrowed wardrobe item is then transmitted (813) to the ensembling component 105.)
since the user has to approve the friend borrowing. Therefore, it would have been obvious before the effective filing date of the claimed invention to grant sharing access as taught by Yu. The rationale for doing so would have been that it combines prior art elements according to known methods to yield predictable results: Beckham has fashion items where specific items can be shared but does not provide extensive detail on sharing, and Yu has fashion items where specific items can be shared. The results are predictable since both provide clothing inventory management and friend-sharing features, and Yu merely adds additional sharing functionality. Therefore it would have been obvious to combine Yu with Beckham to obtain the invention.

Regarding claim 16: The method of claim 15 has all of its limitations taught by Beckham in view of Yu and Chen. Beckham further teaches further comprising: displaying a borrow option in association with a given 3D virtual fashion item of one of the friends (Beckham [0191] The user can share specific fashion items they have in their closet or that they purchased from a retailer. Furthermore, the application can interface with 3.sup.rd party websites, e.g. invitation websites. In addition, the application can coordinate attire, colors, etc., for all users in the group.)(Beckham Fig. 8 870)

Beckham discloses borrowing as described above. However, for the purposes of compact prosecution and for further clarity, in a related field of endeavor, Yu teaches: displaying a borrow option in association with a given 3D virtual fashion item of one of the friends (Yu [0041] The sixth column of the table 300 defines the status of the wardrobe item. The status column indicates whether the wardrobe item is available for the user 121 to wear. The status column may also indicate whether the user 121 let a friend or family member borrow the wardrobe item or the user 121 has borrowed the item from a friend or family member 127(a . . . n). The status also includes the location of the wardrobe item. For example, the location could be in a closet, in a drawer, on a shelf, at the dry-cleaners, in the laundry, in storage to name just a few.)(Yu [0060] The user 121 having not found an ensemble 205 that meets his or her needs can receive additional wardrobe items from (family or friends 127(a . . . n) or retailers/designers 739. The user 121 can solicit or request 741 advice from another user 127(a . . . n). The requested advice can allow the other user (i.e. friends and family 127(a . . . n)) through the conduit of a social network 117(a . . . n) to view the image 702. The other user (i.e. friends and family 127(a . . . n)) can drag and drop wardrobe items 503(a . . . n), 505(a . . . n), 507(a . . . n), 509(a . . . n), 511(a . . . n) and 513(a . . . n) from user 121's wardrobe 500 to the mirror 701 or drag and drop items from his or her virtual wardrobe 611 to the mirror for user 121's viewing and borrowing of an item.)

Therefore, it would have been obvious before the effective filing date of the claimed invention to grant sharing access as taught by Yu. The rationale for doing so would have been that it combines prior art elements according to known methods to yield predictable results: Beckham has fashion items where specific items can be shared but does not provide extensive detail on sharing, and Yu has fashion items where specific items can be shared. The results are predictable since both provide clothing inventory management and friend-sharing features, and Yu merely adds additional sharing functionality. Therefore it would have been obvious to combine Yu with Beckham to obtain the invention.
Regarding claim 17: The method of claim 16 has all of its limitations taught by Beckham in view of Yu. Yu further teaches comprising: in response to receiving input from the user that selects the borrow option, generating a message to the one of the friends requesting a corresponding physical fashion item corresponding to the given 3D virtual fashion item (Yu [0060] The user 121 having not found an ensemble 205 that meets his or her needs can receive additional wardrobe items from (family or friends 127(a . . . n) or retailers/designers 739. The user 121 can solicit or request 741 advice from another user 127(a . . . n). The requested advice can allow the other user (i.e. friends and family 127(a . . . n)) through the conduit of a social network 117(a . . . n) to view the image 702. The other user (i.e. friends and family 127(a . . . n)) can drag and drop wardrobe items 503(a . . . n), 505(a . . . n), 507(a . . . n), 509(a . . . n), 511(a . . . n) and 513(a . . . n) from user 121's wardrobe 500 to the mirror 701 or drag and drop items from his or her virtual wardrobe 611 to the mirror for user 121's viewing and borrowing of an item.)(Yu [0063] The user 121 in using embodiments of the present invention can have the ensembling component 105 request (807) a friend 125(a . . . n) give input of a ensemble 205 the user has put together. The friend 125(a . . . n) has advice about the ensemble 205 transmitted (809) back to the ensembling component 105. In response to request (807) the friend 125(a . . . n) can also allow the user 121 to borrow a wardrobe and forward (811) that wardrobe item to be stored in the storage 107 for the user 121. The borrowed wardrobe item is then transmitted (813) to the ensembling component 105.).

Therefore, it would have been obvious before the effective filing date of the claimed invention to grant sharing access as taught by Yu.
The rationale for doing so would have been that it combines prior art elements according to known methods to yield predictable results: Beckham has fashion items where specific items can be shared but does not provide extensive detail on sharing, and Yu has fashion items where specific items can be shared. The results are predictable since both provide clothing inventory management and friend-sharing features, and Yu merely adds additional sharing functionality. Therefore it would have been obvious to combine Yu with Beckham to obtain the invention.

Regarding claim 18: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches further comprising receiving input that selects a given 3D virtual fashion item from the virtual wardrobe to promote to other users, wherein the other users purchase the given 3D virtual fashion item by interacting with the virtual wardrobe of the user (Beckham [0185] Referring to operation 640, one disposition method can include the sale of the fashion items. The application is configured to permit the sale of items to a limited number of networked user's. For instance, the application can associate one or more selected network users as users that can purchase items from a selling user's virtual closet. In one example, only a subset of the user's networked “friends” can have access to fashion items for sale. In an alternative embodiment, each fashion item can be offered for sale to all users of the application, including networked users and non-networked users. In such an embodiment, only the details concerning the particular fashion item for sale are accessible. For instance, when the user indicates that certain fashion items are available for sale to the entire population of application users, the application can create a listing or matrix of fashion items for sale based on the fashion item data stored in the database.).
Regarding claim 19 (independent): The claim is a parallel version of claim 1. As such it is rejected under the same teachings.

Regarding claim 20 (independent): The claim is a parallel version of claim 1. As such it is rejected under the same teachings.

Claim(s) 3-4, 15 is/are rejected under 35 U.S.C. 103 as being unpatentable over Beckham U.S. Patent/PG Publication 2017001145 in view of Yu U.S. Patent/PG Publication 20150019992 and Chen U.S. Patent/PG Publication 20200320769.

Regarding claim 3: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches further comprising processing the image depicting the real-world fashion item by a (Beckham [0106] The archetype association scheme 263d creates fashion items based on a standard or predefined categories of garments archetypes. For instance, if the software application 30 determines that a pair of dark blue skinny jeans should be present as a possible virtual outfit, the software application associates the fashion items with a standard image or file type associated with blue jeans that have tapered or slim fit, and cause that particular garment item to be displayed via the user interface 28 of the computing device 20.)(Beckham [0096] Continuing with FIG. 3D, each virtual outfit 200 includes one or more fashion item models 220 that are generated utilizing an item model acquisition scheme 260. The item model acquisition scheme 260 can generate the fashion item model 220 and its component data (see FIG. 3B) for display via the user interface 28 of the user's computing device 20. Several item model acquisition schemes 260 are available including: a wire frame model scheme 262a; scanned fashion item data 262b; human wire frame model scheme 262c; fashion item archetype association 262d; footwear model scheme 263e; direct user input 262f; an acquired image scheme 262g; and a silhouette model scheme 263h.
Typically, one item acquisition scheme is used to compile fashion item model 220. However, several different schemes may be used. In one embodiment, the software application 30 can utilize a wire frame model scheme 262a to compile fashion item models, as will be further detailed below. In one embodiment, the wire frame model scheme 262a can be used in combination with the footwear model scheme 263g. In other embodiments, the software application 30 can utilize scanned fashion item data schemes 262b to compile fashion item models. In still other embodiments, certain schemes may be used in combination with others. For instance, in one embodiment, the garment wire frame model scheme 262a can be used in combination with direct user input 262f.) Beckham does not teach using a neural network. In a related field of endeavor, Chen teaches: further comprising processing the image depicting the real-world fashion item by a neural network (Chen[0253] Achieving an accurate garment physics simulation is essential for rendering a photo-realistic virtual avatar image. We can first predict garment attributes (e.g. colour, pattern, material type, washing method) using a machine learning model, such as the deep neural network classifiers or regressors described in Section 2, from one or more garment images and/or garment texture samples, and then map them to a number of fabric physical properties (e.g. stiffness, elasticity, friction parameters) and/or model parameters of the 3D physics model. The garment attribute predictor can be used to initialize the model parameters of the garment physics simulator from the predicted physics attributes or material parameters so that a more accurate draping simulation can be achieved. Fig.5 shows an illustration of an example of using image-based garment attribute prediction to initialize the model parameters for precise garment physics simulation.). 
Therefore, it would have been obvious before the effective filing date of the claimed invention to use a neural network as taught by Chen. The rationale for doing so would have been that it is a simple substitution of processing forms: neural networks are commonly substituted for previous processing forms, with a predictable result since the inputs and outputs are the same. Therefore it would have been obvious to combine Chen with Beckham to obtain the invention.

Regarding claim 4: The method of claim 3 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches: further comprising training the neural network to generate virtual fashion items, the neural network configured to establish a relationship between images depicting real-world fashion items and 3D virtual fashion items of the real-world fashion items (Beckham [0106] The archetype association scheme 263d creates fashion items based on a standard or predefined categories of garments archetypes. For instance, if the software application 30 determines that a pair of dark blue skinny jeans should be present as a possible virtual outfit, the software application associates the fashion items with a standard image or file type associated with blue jeans that have tapered or slim fit, and cause that particular garment item to be displayed via the user interface 28 of the computing device 20.)(Beckham [0096] Continuing with FIG. 3D, each virtual outfit 200 includes one or more fashion item models 220 that are generated utilizing an item model acquisition scheme 260. The item model acquisition scheme 260 can generate the fashion item model 220 and its component data (see FIG. 3B) for display via the user interface 28 of the user's computing device 20. Several item model acquisition schemes 260 are available including: a wire frame model scheme 262a; scanned fashion item data 262b; human wire frame model scheme 262c; fashion item archetype association 262d; footwear model scheme 263e; direct user input 262f; an acquired image scheme 262g; and a silhouette model scheme 263h. Typically, one item acquisition scheme is used to compile fashion item model 220. However, several different schemes may be used. In one embodiment, the software application 30 can utilize a wire frame model scheme 262a to compile fashion item models, as will be further detailed below. In one embodiment, the wire frame model scheme 262a can be used in combination with the footwear model scheme 263g. In other embodiments, the software application 30 can utilize scanned fashion item data schemes 262b to compile fashion item models. In still other embodiments, certain schemes may be used in combination with others. For instance, in one embodiment, the garment wire frame model scheme 262a can be used in combination with direct user input 262f.)

Beckham in view of Yu does not teach using a neural network. In a related field of endeavor, Chen teaches: further comprising training the neural network to generate virtual fashion items, the neural network configured to establish a relationship between images depicting real-world fashion items and 3D virtual fashion items of the real-world fashion items (Chen [0253] Achieving an accurate garment physics simulation is essential for rendering a photo-realistic virtual avatar image. We can first predict garment attributes (e.g. colour, pattern, material type, washing method) using a machine learning model, such as the deep neural network classifiers or regressors described in Section 2, from one or more garment images and/or garment texture samples, and then map them to a number of fabric physical properties (e.g. stiffness, elasticity, friction parameters) and/or model parameters of the 3D physics model. The garment attribute predictor can be used to initialize the model parameters of the garment physics simulator from the predicted physics attributes or material parameters so that a more accurate draping simulation can be achieved. Fig. 5 shows an illustration of an example of using image-based garment attribute prediction to initialize the model parameters for precise garment physics simulation.).

Therefore, it would have been obvious before the effective filing date of the claimed invention to use a neural network as taught by Chen. The rationale for doing so would have been that it is a simple substitution of processing forms: neural networks are commonly substituted for previous processing forms, with a predictable result since the inputs and outputs are the same. Therefore it would have been obvious to combine Chen with Beckham in view of Yu to obtain the invention.

Regarding claim 15: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches: further comprising: in response to accessing the image depicting the real-world fashion item, determining that a previously generated version of the 3D virtual fashion item is not available; in response to determining that the previously generated version of the 3D virtual fashion item is not available, requesting additional images of the real-world fashion item; and applying the additional images to a (Beckham [0105] The acquired image scheme 263g compiles images of fashion items from various sources. Acquired images may be derived or pulled from 3rd party websites. Alternatively, acquired images may be an image of a physical item of clothing taken via the user's camera on the user's computing device 20. Acquired image schemes may require additional direct input from the user regarding item data. Utilizing acquired images to create wardrobe data may not be advantageous or desirable for some users.
For instance, acquired images may require additional user input that is cumbersome and time-consuming.) and establishing a conversation interface between the user and a group of friends (Beckham [0191] The application also facilitates communication between and among the users within the event group. In one example, the application can facilitate a specific group chat for the planned event. This allows the users within the group to communicate with one another regarding transportation to the events, etc.; such communication can reduce numerous emails, group text messages, etc. Instead, the application incorporates communications among the group all into a single communications link. The application allows a user to transmit fashion items to other users in the group via the communications link. The user can share specific fashion items they have in their closet or that they purchased from a retailer. Furthermore, the application can interface with 3rd party websites, e.g. invitation websites. In addition, the application can coordinate attire, colors, etc., for all users in the group.).

Beckham in view of Yu does not teach using a neural network. In a related field of endeavor, Chen teaches: machine learning model (Chen [0253] Achieving an accurate garment physics simulation is essential for rendering a photo-realistic virtual avatar image. We can first predict garment attributes (e.g. colour, pattern, material type, washing method) using a machine learning model, such as the deep neural network classifiers or regressors described in Section 2, from one or more garment images and/or garment texture samples, and then map them to a number of fabric physical properties (e.g. stiffness, elasticity, friction parameters) and/or model parameters of the 3D physics model. The garment attribute predictor can be used to initialize the model parameters of the garment physics simulator from the predicted physics attributes or material parameters so that a more accurate draping simulation can be achieved. Fig. 5 shows an illustration of an example of using image-based garment attribute prediction to initialize the model parameters for precise garment physics simulation.).

Therefore, it would have been obvious before the effective filing date of the claimed invention to use machine learning/neural networks as taught by Chen. The rationale for doing so would have been that it is a simple substitution of processing forms: neural networks are commonly substituted for previous processing forms, with a predictable result since the inputs and outputs are the same. Therefore it would have been obvious to combine Chen with Beckham in view of Yu to obtain the invention.

Claim 5 is rejected under 35 U.S.C. 103 as being unpatentable over Beckham (U.S. PG Publication 2017001145) in view of Yu (U.S. PG Publication 20150019992) and Wilson (U.S. PG Publication 20210383548).

Regarding claim 5: The method of claim 4 has all of its limitations taught by Beckham in view of Yu and Chen. Beckham in view of Yu and Chen does not teach ground-truth.
In a related field of endeavor, Wilson teaches: further comprising performing training operations comprising: receiving training data comprising a plurality of training images depicting training fashion items and ground-truth 3D virtual fashion items of the training fashion items; applying the neural network to a first training image of the plurality of training images that depicts a first training fashion item to estimate a 3D virtual fashion item; obtaining the ground-truth 3D virtual fashion item corresponding to the first training image; comparing the estimated 3D virtual fashion item to the ground-truth 3D virtual fashion item to compute a deviation; and updating parameters of the neural network based on the computed deviation (Wilson [0178] Embodiments described herein may comprise operations that when executed control a processor to perform operations for training a deep learning model, including, for example, a deep learning noise reduction model or a deep learning ocular structure segmentation model. In various embodiments, the deep learning model is trained and tested using a training set of images and a testing set of images. A ground truth label associated with each member of a training set and testing set may be known or accessed by various embodiments. Training the deep learning model may include training the deep learning model until a loss function stops minimizing, until a threshold level of accuracy is achieved, until a threshold time has been spent training the deep learning model, until a threshold amount of computational resources have been expended training the deep learning model, or until a user terminates training. Other training termination conditions may be employed. Training a deep learning model may also include determining which deep learning model operating parameters are most discriminative in distinguishing a first class from a second class (e.g., ocular structure, background, or noise, not-noise). Training the deep learning model may also include determining settings outside the deep learning model architecture but relevant to its learning behavior.).

Therefore, it would have been obvious before the effective filing date of the claimed invention to use ground-truth training of a neural network as taught by Wilson. The rationale for doing so would have been that it is a simple substitution of processing forms: neural networks are commonly substituted for previous processing forms, with a predictable result since the inputs and outputs are the same. Therefore it would have been obvious to combine Wilson with Beckham in view of Yu and Chen to obtain the invention.

Claim 8 is rejected under 35 U.S.C. 103 as being unpatentable over Beckham (U.S. PG Publication 2017001145) in view of Yu (U.S. PG Publication 20150019992) and Caldwell (U.S. PG Publication 20090157479).

Regarding claim 8: The method of claim 6 has all of its limitations taught by Beckham in view of Yu. Beckham further teaches: searching fashion profiles of one or more other users based (Beckham [0165] As illustrated, screen 1120 includes user closet portal 1122, which provides access to the user's wardrobe data, and a “friends closet” portal 1124, which provides access to the networked users' virtual closets.).

Beckham in view of Yu does not teach searching for similar users. In a related field of endeavor, Caldwell teaches: searching fashion profiles of one or more other users based on the fashion profile of the user and identifying a set of users having similar style as the user based on matching attributes of the fashion profile of the user with attributes of the fashion profiles of the one or more other users (Caldwell [0053] A further embodiment of the invention alternatively or additionally employs a second strategy, which can be considered a more adaptive strategy.
The adaptive strategy operates with less particularized tagging and requires identifications only of relative color, size, product type, and the like. Answers provided in reply to the Personal Preference Questionnaire and the Physical Profile and Preference Data form would be used, not as a way to tag products, but instead as a way to associate prospective consumer purchases or recommendations with purchases of other consumers sharing similar attributes with the shopper at hand. With this, recommendations can be provided according to what persons having similar characteristics have bought previously. For example, the system 10 could provide a recommendation as follows: "People who call themselves Chic buy a lot of product X. Therefore, recommend product X to other people who describe themselves as Chic".).

Therefore, it would have been obvious before the effective filing date of the claimed invention to search for similar users as taught by Caldwell. The rationale for doing so would have been that it combines prior art elements according to known methods to yield predictable results: Beckham has a virtual wardrobe with fashion profiles and additional users, and Caldwell has a virtual wardrobe with fashion profiles and additional users, where Caldwell adds a system of searching for other users. This would have predictable results in Beckham since it merely provides additional functionality using features that already exist in Beckham. Therefore it would have been obvious to combine Caldwell with Beckham in view of Yu to obtain the invention.

Claim 10 is rejected under 35 U.S.C. 103 as being unpatentable over Beckham (U.S. PG Publication 2017001145) in view of Yu (U.S. PG Publication 20150019992) and Wilson (U.S. PG Publication 20210383548).

Regarding claim 10: The method of claim 1 has all of its limitations taught by Beckham in view of Yu. Beckham in view of Yu does not teach ground-truth.

In a related field of endeavor, Wilson teaches: further comprising: receiving training data comprising a plurality of training images depicting training fashion items and ground-truth 3D virtual fashion items of the training fashion items; applying a neural network to a first training image of the plurality of training images that depicts a first training fashion item to estimate a 3D virtual fashion item; and updating parameters of the neural network based on a deviation between the estimated 3D virtual fashion item and the ground-truth 3D virtual fashion item, the 3D virtual fashion item being generated by applying the neural network to the image depicting the real-world fashion item (Wilson [0178] Embodiments described herein may comprise operations that when executed control a processor to perform operations for training a deep learning model, including, for example, a deep learning noise reduction model or a deep learning ocular structure segmentation model. In various embodiments, the deep learning model is trained and tested using a training set of images and a testing set of images. A ground truth label associated with each member of a training set and testing set may be known or accessed by various embodiments. Training the deep learning model may include training the deep learning model until a loss function stops minimizing, until a threshold level of accuracy is achieved, until a threshold time has been spent training the deep learning model, until a threshold amount of computational resources have been expended training the deep learning model, or until a user terminates training. Other training termination conditions may be employed. Training a deep learning model may also include determining which deep learning model operating parameters are most discriminative in distinguishing a first class from a second class (e.g., ocular structure, background, or noise, not-noise). Training the deep learning model may also include determining settings outside the deep learning model architecture but relevant to its learning behavior.).

Therefore, it would have been obvious before the effective filing date of the claimed invention to use ground-truth training of a neural network as taught by Wilson. The rationale for doing so would have been that it is a simple substitution of processing forms: neural networks are commonly substituted for previous processing forms, with a predictable result since the inputs and outputs are the same. Therefore it would have been obvious to combine Wilson with Beckham in view of Yu to obtain the invention.

Conclusion

For the prior art referenced and the prior art considered pertinent to Applicant's disclosure but not relied upon, see PTO-892 "Notice of References Cited".

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JASON PRINGLE-PARKER, whose telephone number is (571) 272-5690 and e-mail is jason.pringle-parker@uspto.gov. The examiner can normally be reached 8:30am-5:00pm EST, Monday-Friday. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JASON A PRINGLE-PARKER/
Primary Examiner, Art Unit 2617
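The training operations that the rejection maps to Wilson [0178] for claims 5 and 10 describe a standard supervised-learning loop: apply a network to a training input, compare the estimate against the ground truth, compute a deviation, and update parameters from that deviation. The loop can be sketched in plain Python; every name and data value below is a hypothetical illustration, not taken from Wilson, the claims, or any other cited reference.

```python
# Minimal sketch of the claimed receive/apply/compare/update training loop.
# A single linear layer stands in for the neural network; the structure of
# the loop, not the model, is the point.
import random

random.seed(0)

def estimate(params, features):
    # Stand-in "neural network": one linear layer.
    return sum(p * f for p, f in zip(params, features))

def train(data, lr=0.1, epochs=1000):
    # Receive training data: (feature vector, ground-truth target) pairs.
    params = [random.uniform(-0.1, 0.1) for _ in range(len(data[0][0]))]
    for _ in range(epochs):
        for features, ground_truth in data:
            est = estimate(params, features)       # apply the model
            deviation = est - ground_truth         # compare to ground truth
            for i, f in enumerate(features):
                params[i] -= lr * deviation * f    # update from the deviation
    return params

# Toy "training images" reduced to feature vectors; targets are ground truth.
data = [([1.0, 2.0], 5.0), ([2.0, 1.0], 4.0)]
params = train(data)
```

A real implementation would replace the linear stand-in with a 3D-generation network and the scalar deviation with a geometric loss, but the claimed structure of the loop is the same.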

Prosecution Timeline

Jul 09, 2024
Application Filed
Jan 23, 2026
Non-Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12603978
SYSTEM AND METHOD FOR PARALLAX CORRECTION FOR VIDEO SEE-THROUGH AUGMENTED REALITY
2y 5m to grant • Granted Apr 14, 2026

Patent 12597181
HIGH DYNAMIC RANGE DIGITAL IMAGE EDITING VISUALIZATIONS
2y 5m to grant • Granted Apr 07, 2026

Patent 12597210
GENERATING POLYGON MESHES APPROXIMATING SURFACES WITH SUB-CELL FEATURES
2y 5m to grant • Granted Apr 07, 2026

Patent 12592008
INFORMATION ANALYSIS SYSTEM, INFORMATION ANALYSIS METHOD, AND NON-TRANSITORY RECORDING MEDIUM
2y 5m to grant • Granted Mar 31, 2026

Patent 12586205
SYSTEM AND METHOD FOR DETECTING A BOUNDARY IN IMAGES USING MACHINE LEARNING
2y 5m to grant • Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
84%
Grant Probability
96%
With Interview (+12.7%)
2y 5m
Median Time to Grant
Low
PTA Risk
Based on 546 resolved cases by this examiner. Grant probability derived from career allow rate.
