Prosecution Insights
Last updated: April 19, 2026
Application No. 18/378,593

SYSTEMS AND METHODS FOR USING MACHINE LEARNING MODELS TO EFFECT VIRTUAL TRY-ON AND STYLING ON ACTUAL USERS

Status: Non-Final OA (§102)
Filed: Oct 10, 2023
Examiner: LETT, THOMAS J
Art Unit: 2611
Tech Center: 2600 (Communications)
Assignee: Zelig Technology LLC
OA Round: 1 (Non-Final)
Grant Probability: 83% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 8m
Grant Probability With Interview: 47%

Examiner Intelligence

Career Allow Rate: 83% (599 granted / 719 resolved); +21.3% vs TC avg, above average
Interview Lift: -36.0% (83% without an interview vs 47% with, across this examiner's resolved cases with an interview)
Typical Timeline: 2y 8m average prosecution; 26 applications currently pending
Career History: 745 total applications across all art units
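The headline figures above are simple arithmetic on the raw counts. A minimal sketch (plain Python; the variable names are illustrative, not from any real API) reproducing them:

```python
# Career allow rate: granted over resolved, using the counts shown above.
granted, resolved = 599, 719
allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")   # Career allow rate: 83%

# Interview lift: grant probability with an interview minus without.
without_interview = 0.83   # baseline grant probability
with_interview = 0.47      # grant probability after an interview
lift = with_interview - without_interview
print(f"Interview lift: {lift:+.1%}")           # Interview lift: -36.0%
```

The negative lift is unusual for this tool's reports; it indicates that, in this examiner's history, resolved cases with an interview were granted substantially less often than those without.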

Statute-Specific Performance

§101: 11.1% (-28.9% vs TC avg)
§103: 27.4% (-12.6% vs TC avg)
§102: 47.6% (+7.6% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Tech Center averages are estimates. Based on career data from 719 resolved cases.
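As a sanity check on the per-statute figures above, each delta is the examiner's rate minus the Tech Center average, so the TC average can be recovered by subtraction. A throwaway Python sketch (numbers copied from the figures above) shows every statute implies the same ~40.0% TC average estimate:

```python
# (examiner rate %, delta vs TC avg %) per statute, from the figures above
stats = {
    "§101": (11.1, -28.9),
    "§103": (27.4, -12.6),
    "§102": (47.6, +7.6),
    "§112": (11.6, -28.4),
}

# Implied TC average = examiner rate - delta
implied_tc_avg = {s: round(rate - delta, 1) for s, (rate, delta) in stats.items()}
print(implied_tc_avg)   # every statute implies a 40.0% TC average
```

This internal consistency suggests the tool compares all four statutes against a single Tech Center baseline rather than per-statute baselines.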

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after allowance or after an Office action under Ex Parte Quayle, 25 USPQ 74, 453 O.G. 213 (Comm'r Pat. 1935). Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, prosecution in this application has been reopened pursuant to 37 CFR 1.114. Applicant's submission filed on 18 August 2025 has been entered.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claim 1 is rejected under 35 U.S.C. 102(a)(2) as being anticipated by Li et al. (US 20230086880 A1, having provisional application 63/245,935 filed September 20, 2021).

Regarding claim 1, Li et al. discloses a method of styling virtual try-on of articles of clothing (image-based try-on system adjusting position and dimensions of each garment, paras. 0013, 0097), the method comprising: selecting a subset of a large dataset of images of garments from a pre-existing database, wherein the subset of the large data set of images are labelled with a specific style (apply the pre-trained DeepFashion2 network to person images b to obtain the ground truth key points K for training.
We require each b in the dataset to be paired with a garment representation A (such paired dataset can be easily acquired on a fashion retailer's website), para. 0111, garment category metadata a.sub.t in A allows us to identify the garment key point K, that corresponds to the garment A, (as a person b may be wearing multiple garments)); generating a digital taxonomy having one or more labels associated with each image in the subset of the large dataset of images, wherein at least one of the one or more labels corresponds to the specific style (the following classes: background, hair, face, neckline, right arm, left arm, right shoe, left shoe, right leg, left leg, bottoms, full body, tops, outerwear, bags, belly, para. 0096); training a style classifier using the digital taxonomy applied to the subset of the large dataset of images of garments (Key Points Predictor Network G.sub.k takes in the garment representation A, the body pose representation b.sub.p, the control parameter Z, and outputs the garment key points K′=G.sub.k (A, b.sub.p, Z). Note that Z was broadcast onto a 2D plane of the same size as b.sub.p and concatenated with other inputs. G.sub.k uses the identical architecture of G.sub.f, with the exception that its output is of the shape N×n×2 where N is the batch size and n is the number of key points. Following the training procedure of G.sub.f, we train the network using L.sub.1 loss, L.sub.2, and the structure loss L.sub.s computed between K′ and K and compute the control parameter loss L.sub.z, para. 0114); and generating a first dataset based on executing the trained style classifier on the large dataset of images of garments (using trained networks, the system iteratively generates an image of a model wearing a complete Outfit, para. 0016), the first dataset comprising a plurality of images of garments meeting a threshold probability of depicting the specific style (Second Most Likely Class. 
From the Softmax output B″.sub.m, we first set the value of the channel that corresponds to the garment to zero, para. 0140). Claims 21-27 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Kapur et al. (US 20120299912 A1). Regarding claim 21, Kapur et al. discloses a method of enhancing an e-commerce retail experience with a shopper avatar (enable a customer to `try on`, virtually, all manner of wearable articles. Such articles may include clothing, eyewear, footwear, accessories, prostheses, jewelry, tattoos, and/or make-up, as examples. Having augmented her avatar with such articles in virtualized form, the customer may be able to predict how she might look when wearing the corresponding actual articles. This approach can be used to pre-screen the articles prior to an actual visit to the fitting room, to save time. In addition, the customer may elect to share with others the images of her avatar with the attached virtual articles, para. 0015), the method comprising: retrieving a stored outfit selected by a user, the stored outfit comprising at least one garment (wearable article is selected by invoking a selection engine of a local or remote computing system, such as computing system 16. The selection engine is configured to select the wearable article from an inventory of articles accessible to the user, para. 0037); generating a shopper avatar wearing the stored outfit based on a physique of the user (generating a posable, three-dimensional, virtual avatar to substantially resemble the user. The avatar may be constructed based at least partly on the image(s) received, para. 0023, figure 2); providing the shopper avatar wearing the stored outfit for display to the user in a user interface (a virtualized form of the wearable article is sized to fit the user's body, para. 0046; virtualized forms of boots 84 and jacket 86 are attached to the avatar shown on a display interface 18A, para. 
0048); receiving a selection of a new garment by the user, the selection initiating a virtual try-on of the new garment to replace the at least one garment with the new garment based on the new garment and the at least one garment having a same type of clothing (virtualized forms of a first, second, third wearable article, etc., may be attached to the avatar, para. 0047); configuring the shopper avatar to wear the new garment according to one or more styling options (wearable article is selected for the user via input from another person. In one embodiment, the other person may be a friend or relative of the user. The other person may be selected by a local or remote computing system, such as computing system 16A, based on suitable criteria. For example, the other person may be selected because that person exhibits purchasing behaviors similar to that of the user--a history of purchasing the same actual items, or items of the same style or price range, para. 0034); and providing the configured shopper avatar wearing the new garment for display to the user in the user interface (the methods may be enabled by different configurations. The methods may be entered upon any time system 10 is operating, and may be executed repeatedly, para. 0020). Regarding claim 22, Kapur et al. discloses the method of claim 21 further comprising: generating the user interface in a web browser, the user interface overlaid on an online retailer e-commerce website, and wherein the selection of the new garment by the user comprises a user interaction in the web browser on the online retailer e-commerce website (internet service, para. 0037). Regarding claim 23, Kapur et al. discloses the method of claim 21 further comprising: generating the user interface on a mobile device, the user interface overlaid on an online retailer e-commerce application, and wherein the selection of the new garment by the user comprises a user interaction in the online retailer e-commerce application (paras.
0057, 0065). Regarding claim 24, Kapur et al. discloses the method of claim 21, wherein each garment represented on the shopper avatar is configurable according to a set of styling options based on a type of clothing (inventory of wearable articles may be interrogated for various data--e.g., a tally of the number of clothing items of a particular color, category, style, and/or brand, para. 0037; determined whether the wearable article meets selected match criteria. Thus, the wearable article referred to hereinabove may be a wearable first article; the selection engine may be configured to select the first article based on one or more of color, style, and/or brand matching with respect to a wearable second article, para. 0042.). Regarding claim 25, Kapur et al. discloses the method of claim 21, the method further comprising: receiving a selection of an accessory available on an online retailer e-commerce property (metrical data for the selected article may be adapted based on the underlying topology of the avatar, para. 0047); configuring the shopper avatar to render the accessory based on the physique of the user (virtualized forms of a first, second, third wearable article, etc., may be attached to the avatar, para. 0047); and providing the configured shopper avatar with the accessory for display to the user in the user interface (a plurality of wearable articles are to be reviewed simultaneously, virtualized forms of a first, second, third wearable article, etc., may be attached to the avatar, para. 0047). Regarding claim 26, Kapur et al. discloses the method of claim 21 further comprising: receiving a selection of a second garment (first, second, third wearable article) available on an online retailer e-commerce property (metrical data for the selected article may be adapted based on the underlying topology of the avatar, para. 
0047); configuring the shopper avatar to render the second garment based on the physique of the user (virtualized forms of a first, second, third wearable article, etc., may be attached to the avatar, para. 0047); and providing the configured shopper avatar with the second garment for display to the user in the user interface (a plurality of wearable articles are to be reviewed simultaneously, virtualized forms of a first, second, third wearable article, etc., may be attached to the avatar, para. 0047). Regarding claim 27, Kapur et al. discloses the method of claim 26, wherein the new garment is available on a third-party online retailer different from the online retailer e-commerce property where the second garment is available (the inventory may include articles for sale to the user--e.g., the inventory of one vendor or the combined inventories of a plurality of vendors, para. 0037). Claims 28-40 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Higgins et al. (US 20180137515 A1). Regarding claim 28, Higgins et al. discloses a method of enhancing an e-commerce retail experience with on-demand inventory, the method comprising: receiving a styled configuration of a user-selected garment from a user, the styled configuration based on one or more user interactions with a virtual try-on system (TV screen may act as a mirror for the user. The system may then superimpose a representation of a clothing item onto the image of the user and show the representation superimposed on the image on the TV screen. 
Using one or more gestures, the user may browse, para. 0037); determining a plurality of recommended fashion items based on the styled configuration of the user-selected garment and available inventory of the plurality of recommended fashion items from at least one online retailer (virtual clothing items or virtual outfits (e.g., a representation of a set of garments and accessories that may be worn together) that the system may suggest (e.g., by displaying them on the TV screen) or that may be found in an electronic marketplace catalogue, para. 0037); assembling a plurality of outfit ensembles based on the plurality of recommended fashion items and the received styled configuration of the user-selected garment to complete a look (virtual clothing items or virtual outfits (e.g., a representation of a set of garments and accessories that may be worn together) that the system may suggest (e.g., by displaying them on the TV screen) or that may be found in an electronic marketplace catalogue, para. 0037); and generating a user interface to cause display of the plurality of outfit ensembles to the user (para. 0037). Regarding claim 29, Higgins et al. discloses the method of claim 28, the method further comprising: retrieving an image of the styled configuration of the user-selected garment overlaid and rendered on a target model based on a user uploaded photo from a database (paras. 0040, 0042, 0051); and providing the image of the styled configuration of the user-selected garment overlaid and rendered on the target model based on the user uploaded photo in the user interface (paras. 0040, 0042, 0051). Regarding claim 30, Higgins et al. discloses the method of claim 28, wherein the one or more user interactions with the virtual try-on system comprise: selecting a styled garment from a catalog of the at least one online retailer (para.
0047); styling the user-selected garment based on a user selection of one or more styling options associated with the user-selected garment (skeletal tracking may be used to proportionally increase the texture size in the display of the virtual clothing items the user virtually tries on such that the virtual clothing texture is appropriately sized and properly tracks the user (e.g., when the user moves in the 3D space), para. 0100); and receiving one or more styling recommendations associated with the user-selected garment based on other outfits styled by the user (based on the determined user's body measurements, the system may recommend to the user only virtual clothing items that fit the user's body measurements. Not having to consider virtual clothing items that do not fit may save the user time while shopping on the electronic marketplace. In certain example embodiments, the system provides the user with a choice for changing the size of the virtual recommended item, para. 0041). Regarding claim 31, Higgins et al. discloses the method of claim 28, wherein determining a plurality of recommended fashion items based on the styled configuration of the user-selected garment and available inventory of the plurality of recommended fashion items from at least one online retailer comprises: retrieving an image of a recommended garment overlaid and rendered on a target model based on a user uploaded photo from a database (camera of a mobile device (e.g., an iPhone or Android phone) may be used to capture an image of the user and to show the captured image of the user on a TV screen. The mobile device may be mounted on a TV set and may be in communication with the TV set to transmit the captured image of the user from the mobile device to the TV set. Thus, the captured image of the user may be communicated to the TV screen and displayed, para. 
0037); and providing the image of the recommended garment overlaid and rendered on the target model based on the user uploaded photo in the user interface (paras. 0040, 0042, 0051). Regarding claim 32, Higgins et al. discloses the method of claim 28, wherein determining a plurality of recommended fashion items based on the styled configuration of the user-selected garment and available inventory of the plurality of recommended fashion items from at least one online retailer (electronic marketplace operator may recommend to the user a virtual clothing item or a virtual outfit to virtually try on, para. 0041) comprises: selecting a subset of a large dataset of images of available garments from a database, the subset selected based on a clothing type of the user-selected garment, wherein the available garments in the subset are of a different clothing type than the clothing type of the user-selected garment (system may present to the user a virtual clothing item that complements another virtual clothing item the user is already virtually trying on, para. 0054); ranking the subset of a large dataset of images of available garments from a database based on user-generated metadata (system may determine the user's affinity for a type of item (e.g., skirts, dresses, shoes, or jewelry) or characteristic of an item (e.g., color, pattern, or texture) based on tracking the amount of time the user spends examining a representation of an item displayed to the user, para. 0093); and determining the plurality of recommended fashion items based on the ranking (recommend certain items based on the determined affinity of the user for particular items, para. 0093). Regarding claim 33, Higgins et al. 
discloses the method of claim 32, the method further comprising: retrieving availability of the plurality of recommended fashion items from the available inventory of the at least one online retailer based on a physique of the user (system may recommend to the user only virtual clothing items that fit the user's body measurements, para. 0041); and determining the plurality of recommended fashion items based on the retrieved availability (system may pre-filter the clothing items whose representations should be presented to the user such that only representations of clothing items matching the user's body measurements may be presented to the user, para. 0052). Regarding claim 34, Higgins et al. discloses the method of claim 28, wherein generating a user interface to cause display of the plurality of outfit ensembles to the user comprises: for each fashion item in each outfit ensemble: retrieving a link to the fashion item from a catalog of an online retailer, the link enabling the user to purchase the fashion item from the online retailer (an action on an electronic marketplace is the system initiating a purchase order using the electronic marketplace, on behalf of the user, in response to receiving a gesture from the user, the gesture representing a command to buy an item, para. 0034); causing a rendering of the fashion item onto a target model based on a physique of the user (may generate (e.g., build) an avatar of the user using the knowledge of the user's body measurements, para. 0039); configuring the rendering of the fashion item based on a set of styling options (the user may employ a particular gesture to change the color, pattern, or texture of a representation of the clothing item, para. 
0043); and providing the configured rendering of the fashion item onto the target model and the link to the fashion item in the catalog of the online retailer in the user interface for display to the user on a user device (displaying the 3D representation of clothing overlaid on the 3D avatar, para. 0040). Regarding claim 35, Higgins et al. discloses a method of generating a virtual closet of available digital fashion items, the method comprising: retrieving a shopper avatar to render in a user interface according to a shopper likeness, the shopper avatar previously configured by a user of a virtual try-on system (dimensions of the user's body, determined from the captured spatial data, may be used (e.g., by a machine of the system) to create an avatar (e.g., of the user) that may have a shape that resembles the shape of the user's body, para. 0038); receiving data representing a plurality of garments from a database associated with the virtual try-on system, the data including a stored outfit comprising at least one garment from the plurality of garments (when the user is in the virtual dressing room (e.g., when the user interacts with the system described herein), in addition to virtually trying on virtual items of clothing, the user may add a virtual outfit to his or her closet, para. 0045); providing the shopper avatar wearing the stored outfit in the user interface in the virtual try-on system (display the 3D representation of clothing overlaid on the 3D avatar, para. 0040); generating a user interface element in the user interface, the user interface element comprising a visual representation of a select garment of the plurality of garments (displaying a representation of a clothing item mapped to a representation of the user, para. 0036); receiving a user selection of the user interface element (the user may employ a particular gesture to change the color, pattern, or texture of a representation of the clothing item the user is virtually trying on, para. 
0043); and rendering the select garment on the shopper avatar for display in the user interface (the system maps a 3D virtual clothing item onto the user's 3D avatar, para. 0042). Regarding claim 36, Higgins et al. discloses the method of claim 35, further comprising: generating the user interface in a web browser, the user interface overlaid on an online retailer e-commerce website (displaying (e.g., on a TV screen, computer screen, or a mobile device screen) a representation of a clothing item (hereinafter, also “virtual clothing” or “item of virtual clothing”) mapped to a representation of the user, thus allowing the user to virtually try on the virtual clothing item in a virtual dressing room, para. 0034); and providing a transactional user interface element in the user interface, the transactional user interface element enabling the user to purchase the select garment from the online retailer e-commerce website (an action on an electronic marketplace is the system initiating a purchase order using the electronic marketplace, on behalf of the user, in response to receiving a gesture from the user, the gesture representing a command to buy an item, para. 0034). Regarding claim 37, Higgins et al. discloses the method of claim 35, further comprising: generating the user interface on a mobile device, the user interface overlaid on an online retailer e-commerce application (initiating a purchase order using the electronic marketplace, on behalf of the user, in response to receiving a gesture from the user, the gesture representing a command to buy an item, para. 
0034); and providing a transactional user interface element in the user interface, the transactional user interface element enabling the user to purchase the select garment from the online retailer e-commerce application (initiating a purchase order using the electronic marketplace, on behalf of the user, in response to receiving a gesture from the user, the gesture representing a command to buy an item, para. 0034; the user may also employ a gesture or voice command to save a reference to an item within a shopping cart or purchase the item using the electronic marketplace, para. 0037). Regarding claim 38, Higgins et al. discloses the method of claim 35, wherein the plurality of garments includes one or more available garments for purchase from a third-party online retailer (seller may display the item within the depth sensor's field of spatial data capture so that the depth sensor may capture the dimensions of that item, para. 0052): retrieving a set of images associated with the one or more available garments from the third-party online retailer (seller may display the item within the depth sensor's field of spatial data capture so that the depth sensor may capture the dimensions of that item, para. 0052) providing the set of images of the one or more available garments from the third-party online retailer for display in the user interface element in the user interface to the user (system may determine the user's affinity for a type of item (e.g., skirts, dresses, shoes, or jewelry) or characteristic of an item (e.g., color, pattern, or texture) based on tracking the amount of time the user may spend examining a representation of an item displayed to the user, para. 
0053); receiving a selection of an image of an available garment from the set of images (user may be presented with an option to buy 3.1 the outfit or item (e.g., presented or selected), find out the price 3.2 of the outfit or item, add the virtual outfit or virtual clothing item to the user's virtual closet 3.3, or select the highlighted item 3.4, paras. 0050, 0072); and providing a transactional user interface element in the user interface, the transactional user interface element enabling the user to purchase the available garment from the third-party online retailer (system may receive a record (e.g., an inventory entry or a description) of an item to be used in the creation of a catalogue of items. Such a record may include an image of the item, para. 0043). Regarding claim 39, Higgins et al. discloses the method of claim 35, the method further comprising: receiving a link to a catalog of an online retailer, the link enabling the user to purchase one or more available garments from the online retailer (seller may use a network-based system to present the item to the user of the network-based system (e.g., a potential buyer of the item). Examples of network-based systems include commerce systems (e.g., shopping websites), publication systems (e.g., classified advertisement websites), listing systems (e.g., auction websites), and transaction systems (e.g., payment websites). The item may be presented within a document (e.g., a webpage) that describes the item or product, para. 0060); retrieving data representing the one or more available garments from the online retailer (seller may use a network-based system to present the item to the user of the network-based system (e.g., a potential buyer of the item). Examples of network-based systems include commerce systems (e.g., shopping websites), publication systems (e.g., classified advertisement websites), listing systems (e.g., auction websites), and transaction systems (e.g., payment websites). 
The item may be presented within a document (e.g., a webpage) that describes the item or product, para. 0060), the data including a set of images of the one or more available garments, available sizes in stock inventory, and pricing information (system may also determine the user's one or more sizes (e.g., the user's size may be different for different brands of clothing) based on the acquired spatial data. While the system loads outfits in the user's size, the system may communicate to the user a status update, para. 0070); configuring the rendering of each available garment based on the shopper avatar (captured dimensions and determined size may be provided to a buyer (e.g., by a machine) to help a buyer ascertain the fit of an item, para. 0052); providing the set of images associated with the one or more available garments in the user interface element in the user interface (system may receive a record (e.g., an inventory entry or a description) of an item to be used in the creation of a catalogue of items. Such a record may include an image of the item, para. 0043); in response to a selection of a select available garment in the user interface, providing the configured rendering of the select available garment onto the shopper avatar (system may use an avatar of the user on which the selected virtual outfit is overlaid, para. 0090); and storing the configured rendering of the select available garment onto the shopper avatar and the link to the select available garment in the catalog of the online retailer in the database associated with the virtual try-on system (mix and match a recommended virtual clothing item with virtual outfits or single items saved in the user's virtual closet, buy, sell, or bid for a clothing item on the electronic marketplace, or hold a social network virtual fashion show by presenting the virtual outfit on a virtual runway. The system may provide the user the choice to create a virtual closet to store virtual clothing items, para. 
0045). Regarding claim 40, Higgins et al. discloses the method of claim 35, wherein the data representing the plurality of garments from the database associated with the virtual try-on system comprises historical purchase data from one or more retailers, the method further comprising: retrieving a set of images associated with one or more previously purchased garments from the one or more retailers (seller may use a network-based system to present the item to the user of the network-based system (e.g., a potential buyer of the item). Examples of network-based systems include commerce systems (e.g., shopping websites), publication systems (e.g., classified advertisement websites), listing systems (e.g., auction websites), and transaction systems (e.g., payment websites). The item may be presented within a document (e.g., a webpage) that describes the item or product, para. 0060); providing the set of images of the one or more previously purchased garments from the one or more retailers for display in the user interface element in the user interface to the user (system may receive a record (e.g., an inventory entry or a description) of an item to be used in the creation of a catalogue of items. Such a record may include an image of the item, para. 0043); receiving a selection of an image of a previously purchased garment from the set of images (user may look at a recommended virtual clothing item or virtual outfit, or may browse through a number of suggested virtual outfits 110 or virtual clothing items (e.g., based on deals, stylists, or popularity), para. 0063); and providing the image of the previously purchased garment in the user interface element in the user interface (system may receive a record (e.g., an inventory entry or a description) of an item to be used in the creation of a catalogue of items. Such a record may include an image of the item, para. 0043). 
Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to THOMAS J LETT, whose telephone number is (571) 272-7464. The examiner can normally be reached Mon-Fri, 9-6 ET.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Tammy Goddard, can be reached at (571) 272-7773. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/THOMAS J LETT/
Primary Examiner, Art Unit 2611

Prosecution Timeline

Oct 10, 2023: Application Filed
May 09, 2024: Response after Non-Final Action
Jul 10, 2025: Response after Non-Final Action
Aug 18, 2025: Request for Continued Examination
Oct 31, 2025: Response after Non-Final Action
Nov 29, 2025: Non-Final Rejection, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602714: LIGHTING AND INTERNET OF THINGS DESIGN USING AUGMENTED REALITY. 2y 5m to grant; granted Apr 14, 2026.
Patent 12570401: Robot and Unmanned Aerial Vehicle (UAV) Systems for Cell Sites and Towers. 2y 5m to grant; granted Mar 10, 2026.
Patent 12567217: SMART CONTENT RENDERING ON AUGMENTED REALITY SYSTEMS, METHODS, AND DEVICES. 2y 5m to grant; granted Mar 03, 2026.
Patent 12561867: SYSTEMS AND METHODS FOR AUTOMATICALLY ADDING TEXT CONTENT TO GENERATED IMAGES. 2y 5m to grant; granted Feb 24, 2026.
Patent 12555276: Image Generation Method and Apparatus. 2y 5m to grant; granted Feb 17, 2026.
Study what changed in each case to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 83%
With Interview: 47% (-36.0%)
Median Time to Grant: 2y 8m
PTA Risk: Low
Based on 719 resolved cases by this examiner. Grant probability is derived from the career allow rate.
