Prosecution Insights
Last updated: April 19, 2026
Application No. 17/839,069

METHODS AND SYSTEMS FOR USING AVATARS AND A SIMULATED CITYSCAPE TO PROVIDE VIRTUAL MARKETPLACE FUNCTIONS

Final Rejection §103
Filed: Jun 13, 2022
Examiner: FRUNZI, VICTORIA E.
Art Unit: 3689
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Tag Multimedia LLC
OA Round: 4 (Final)

Grant Probability: 24% (At Risk)
Expected OA Rounds: 5-6
Expected Time to Grant: 4y 3m
Grant Probability with Interview: 48%

Examiner Intelligence

Career Allow Rate: 24% (68 granted / 284 resolved; -28.1% vs TC avg)
Interview Lift: +23.8% among resolved cases with interview
Avg Prosecution: 4y 3m; 50 applications currently pending
Career History: 334 total applications across all art units

Statute-Specific Performance

§101: 35.9% (-4.1% vs TC avg)
§103: 38.3% (-1.7% vs TC avg)
§102: 10.7% (-29.3% vs TC avg)
§112: 10.9% (-29.1% vs TC avg)

Black line = Tech Center average estimate • Based on career data from 284 resolved cases

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. The following is a Final Office Action in response to communications received on 3/5/2026. Claims 1, 3-7, 9-14, and 16-20 are currently pending and have been examined. Claims 1, 5, 7, 12, 14, and 18 have been amended. Claims 2, 8, and 15 are cancelled.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1, 3-7, 9-14, 16-20 are rejected under 35 U.S.C. 103 as being unpatentable over Rathod (US PG PUB 20180350144) in view of Emma et al. (US PG PUB 20190156222) and in view of Ghazanfari (US PG PUB 20190369742) in further view of Perkins (US 20080162262). Regarding claims 1, 7, and 14, Rathod discloses: A method for executing a virtual marketplace platform comprising: (claim 1) A tangible, non-transitory computer-readable medium storing instructions that, when executed, cause a processing device to: (claim 7) (Figure 1) A system comprising: a memory device storing instructions; a processing device communicatively coupled to the memory device, wherein the processing device executes the instructions to: (claim 14) (Figure 1) receiving one or more images of a physical location, wherein the one or more images comprise representations of one or more buildings associated with one or more entities present in the physical location; (Rathod, para 0110), “real world updated map or street view imagery and 360-degree views with street view technology to provides panoramic and outdoor views from positions along many streets in the world and displays panoramas of stitched images, indoor views of businesses and go inside with indoors maps to access real world indoor map and 360-degree views or 360-degree virtual tour of real world including building, mall, shop, stadium, transit and floor of building and associated objects, products, persons, accessories, and items of real world based on street view technology and indoor maps technology”. generating a virtual cityscape by modeling the one or more buildings associated with the one or more entities, wherein the virtual cityscape comprises a virtual map configured to be navigated by one or more virtual avatars, virtual vehicles, or both; (Rathod, para 0422), “the player or avatar of player 3601 also continuously moves about in a range of coordinates in the real world map or virtual world”. 
generating the one or more virtual avatars and including the one or more virtual avatars in the virtual cityscape, wherein: (Rathod, para 0422), “the player or avatar of player 3601 also continuously moves about in a range of coordinates in the real world map or virtual world”. at least one of the virtual avatars comprises an entity virtual avatar associated with an entity occupying a virtual building in the virtual cityscape, and the entity virtual avatar is included inside the virtual building associated with the entity, (Rathod, para 325), “virtual world virtual representation 1833 and/or associate one or more avatars (e.g. seller or staff) 1833 of said particular restraint 1832. If virtual representation 1833 of said particular restaurant of real world and/or associate avatar 1833 not available then generating, creating, and adding, by the server module 188, said virtual representation 1832 and/or associate avatar of seller or staff 1833 of said particular restaurant of real world in the virtual world 1850”. And see Abstract at least one of the virtual avatars comprises a customer virtual avatar associated with a customer, (Rathod, para 0309), “ connected real world objects including products and services and based on that automatically relating or connecting user or virtual avatar or account or profile or virtual representation of user with said interacted or connected or related one or more types of entities or with virtual representation or account or profile of said one or more types of entities in virtual world. For example if user is customer of particular shop in real world then user is also connected with said particular virtual shop in virtual world.) 
and the customer virtual avatar is enabled to traverse the virtual cityscape by moving throughout one or more virtual streets, entering and exiting virtual buildings associated with entities, or some combination thereof; (Rathod, para 0126), “ avatar of visitor and staffs and cooks of restaurants and based on said data generating, recording, simulating and displaying that avatar associated with the user and one or more avatars of accompanied users or persons or contacts who currently visiting particular real world restaurant conducting one or more types of activities including enter in to restaurant, waiting or waiting in queue, sitting at particular seat inside restaurant (wherein identify particular seat inside restaurant based associated beacon, wherein receiving from code from closest or nearest beacon based on received push message from each visitor's mobile device via Bluetooth to identify particular seat where user seats), conversing with accompanied users, doing particular type of work like using laptop, viewing products in showcase or display, reading displayed contents or viewing arts inside restaurant, interacting or talking with waiter or service provider or instructing or ordering particular one or more food items or menu items including soup, starters, main course, deserts including ice cream or order to parcel one or more types of food items or menu items, serving said ordered one or more food items, eating of said one or more food items including show that user is drinking water, soup, juice, tea and coffee, eating starters and main course, eating ice cream and deserts, receiving and viewing bill and providing card or cash and making payment for said ordered one or more food items, using particular type of stencils, cups an crookeries while eating or drinking during breakfast, lunch and dinner, listening particular music while inside restaurant, wash hands with finger bowls, in case of buffet taking particular food items and eat in standing 
position, providing one or more types of reactions expressions including ratings, reviews, and likes and exit from the restaurant.”. receiving one or more images of an interior configuration of a real building associated with the virtual building; and (Rathod, para 0110) “indoor views of businesses and go inside with indoors maps to access real world indoor map and 360-degree views or 360-degree virtual tour of real world including building, mall, shop, stadium, transit and floor of building and associated objects, products, persons, accessories, and items of real world based on street view technology and indoor maps technology”. generating a virtual interior configuration that is similar to the interior configuration depicted in the one or more images and using the virtual interior configuration to populate an interior of the virtual building with one or more goods, products, furniture, objects, decorations, flooring, walls, ceilings, doors, windows, or some combination thereof. (Rathod, para 0110) “indoor views of businesses and go inside with indoors maps to access real world indoor map and 360-degree views or 360-degree virtual tour of real world including building, mall, shop, stadium, transit and floor of building and associated objects, products, persons, accessories, and items of real world based on street view technology and indoor maps technology”. 
Rathod teaches a user avatar that moves about in a virtual environment (Rathod, para 0430), that is created using a user profile (Rathod, para 0071), but does not expressly disclose: wherein an owner or representative associated with the entity registered with the virtual marketplace by uploading an image of the owner or representative and a voice associated of the owner or representative, using an artificial intelligence engine that is trained to provide responses via the entity virtual avatar, wherein the entity virtual avatar is configured based on the image of the owner or representative and using the voice of the owner or representative, and wherein the artificial intelligence engine is trained to receive input from the customer virtual avatar and determine an output to respond, via the entity virtual avatar using the voice, to the input, wherein the input comprises a question and the output comprises an answer; presenting the answer on a user interface of a computing device of the customer; adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar based on a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period; However Emma teaches: wherein an owner or representative associated with the entity registered with the virtual marketplace by uploading an image of the owner or representative and a voice associated of the owner or representative, (Emma, para 0074), “Character profiles have applications to both virtual and augmented reality, wherein based on characteristics input by a user, a virtual or augmented avatar or sprite is displayed embodying the personality and appearance of profile characteristics. 
The AI system is further operable to determine appearance characteristics based on stored profile elements. For example, if a personality profile characteristic includes “strong,” the AI is operable to retrieve relevant muscle size and appearance from an internal or external memory and display the corresponding appearance in a virtual or augmented reality avatar or sprite. In a further embodiment, a virtual or augmented reality avatar or sprite is created by the AI system such that the avatar or sprite mimics the full appearance and personality traits stored in a user's profile and serves as a “digital clone” of a user”. (Emma, para 0052), “As a user continues to interact with the system and more raw data is collected, the AI system continues to develop the user profile. As the profile is developed, the AI is operable to analyze and record how users act in response to audial and visual stimuli from the system in conversations, responses, or questions. In one embodiment, elements of the extracted data are stored in a new personality profile. The AI is then operable to compare and match to the new personality profile such that responses match or are similar to the tone, manner, or phrasing of the new personality profile. In further embodiments, extracted audio and text is replayed in response to stimuli from the user. In this way, a personality is operable to be captured and stored in the system and the AI is operable to take on the persona of the captured user. Responses thus include direct quotations, audio clips, and ideas from the user profile as well as a tone, manner, and or phrasing similar to that of the user profile. 
In some embodiments, extracted audio is analyzed for common phonics, tones, and speech patterns, and these phonics, tones, and speech patterns are extracted and replayed in a manner that allows for construction of words and phrases that were not directly recorded in the user interaction.” using an artificial intelligence engine that is trained to provide responses via the entity virtual avatar, wherein the entity virtual avatar is configured based on the image of the owner or representative and using the voice of the owner or representative, and wherein the artificial intelligence engine is trained to receive input from the customer virtual avatar and determine an output to respond, via the entity virtual avatar using the voice, to the input, (Emma, para 0052 and 0074) Therefore it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in Rathod the limitations above, as is taught by Emma, since this will add more personalization of the avatar by making the avatar look and sound like the user. Rathod in view of Emma does not disclose: wherein the input comprises a question and the output comprises an answer; presenting the answer on a user interface of a computing device of the customer; adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar based on a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period; However Ghazanfari teaches generating, via an artificial intelligence engine, one or more machine learning models trained to receive input from the customer virtual avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer. 
(Ghazanfari, (Para 0044), “the virtual environment can be interactive. More specifically, an artificial intelligence (AI) character, as shown in FIG. 7, can be created in the virtual environment to interact with the user. For example, an AI sales person can be embedded in a virtual shopping destination (e.g., virtual car dealership). Using various AI technologies (e.g., CNN, speech recognition, etc.), the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products). In addition to chatting, the AI character may also perform actions in response to the user's actions or commands, thus providing the user with a shopping experience similar to the one in the physical world. For example, a car salesman may “follow” the user around as the user “walks” in the virtual car dealership; or a clothing model may turn or move (e.g., raising an arm) in response to the user's verbal request. In some embodiments, the sales person that appears as a holographic object with the virtual environment, may be controlled remotely by a live person, such that the user communicates through the holographic character with a live sales person at a remote location”. presenting the answer on a user interface of a computing device of the customer; (Ghazanfari, para 0049), “ the user can make a purchase by clicking a displayed shopping cart symbol, which can provide a link to a third-party shopping system. More specifically, the third-party shopping system can maintain a shopping cart that tracks the user's purchases from multiple virtual shopping destinations (e.g., from multiple apparel stores). This way, the system does not need to track the user's purchases across the multiple shopping destinations, but hands off this responsibility to the third-party shopping system. 
In some embodiments, each time a user places an item in the shopping cart, a particular link to the selected item can be sent to the third-party shopping system, allowing the shopping system to obtain detailed information (e.g., item description and price) associated with the selected item. For example, a user may select a shirt from a first virtual store and a pair of pants from a second virtual store. Instead of having to pay at each store separately, the user can do a final checkout at a single location. In addition to providing convenience to the user, this also allows for scaling up of the system. The system can add more virtual shopping destinations (e.g., virtual stores) without the need for establishing additional payment systems”. Therefore it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the combination of Rathod and Emma, generating, via an artificial intelligence engine, one or more machine learning models trained to receive input from the customer virtual avatar and determine an output to respond to the input, wherein the input comprises a question and the output comprises an answer; presenting the answer on a user interface of a computing device of the customer, as is taught by Ghazanfari since the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products). In addition to chatting, the AI character may also perform actions in response to the user's actions or commands, thus providing the user with a shopping experience similar to the one in the physical world”. (Ghazanfari, para 0044). 
Rathod in view of Emma in view of Ghazanfari does not disclose: adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar based on a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period; However Perkins teaches: adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar based on a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period; [0044] Further, a retail customer could not only observe a 3-D depiction of a proposed new planogram for an aisle in the retail seller's store, but could also be provided with visualized data from actual consumer studies showing hot spots on the shelf, customer purchase patterns, etc., or could observe interactions of live customers who are networked to the immersive visualization center and observing the shelf space in their own virtual reality environment. Retailers could observe the purchase rate, the pickup rate, consumer eye response and visual attractors, etc., in real time as changes are made to packaging, shelf layout, planograms, etc. Observation gallery 218 may allow these (or other relevant) parties 216 to observe and/or interact with a participant 214 and/or the virtual or physical environment provided by immersive visualization center 200. 
Therefore it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Rathod in view of Emma in view of Ghazanfari to include adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar based on a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period, as taught in Perkins, in order to make real time changes based on customer actions (paragraph 0044).

Regarding claims 3, 10 and 16, the combination of Rathod, Emma, Ghazanfari in further view of Perkins teaches the limitations set forth above. Rathod in view of Emma does not disclose: wherein the question is associated with a product or service offered by the entity and the answer is associated with the product or the service. However Ghazanfari teaches wherein the question is associated with a product or service offered by the entity and the answer is associated with the product or the service. (Ghazanfari, para 0044), “an AI sales person can be embedded in a virtual shopping destination (e.g., virtual car dealership). Using various AI technologies (e.g., CNN, speech recognition, etc.), the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products)”. Therefore it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the combination of Rathod and Emma, wherein the question is associated with a product or service offered by the entity and the answer is associated with the product or the service, as is taught by Ghazanfari since the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products).
In addition to chatting, the AI character may also perform actions in response to the user's actions or commands, thus providing the user with a shopping experience similar to the one in the physical world”. (Ghazanfari, para 0044).

Regarding claims 4, 11 and 17, the combination of Rathod, Emma, Ghazanfari in further view of Perkins teaches the limitations set forth above. Rathod further discloses: enabling two or more customer virtual avatars to communicate with each other (Rathod, para 0111), “talking or talking with one or more accompanied users”. after each of the users associated with the two or more customer virtual avatars provides consent. (Rathod, para 0108), “limit viewing of or sharing of said generated simulation of user's real world life or user's real world life activities to selected one or more contacts, followers, all or one or more criteria or filters specific users of network or make it as private”.

Regarding claims 5, 12 and 18, the combination of Rathod, Emma, Ghazanfari in further view of Perkins teaches the limitations set forth above. Rathod in view of Emma does not disclose: receiving a selection to add the product or the service to a virtual shopping cart provided by the virtual marketplace platform. However Ghazanfari teaches: receiving a selection to add the product or the service to a virtual shopping cart provided by the virtual marketplace platform. (Ghazanfari, para 0049), “the user can make a purchase by clicking a displayed shopping cart symbol, which can provide a link to a third-party shopping system. More specifically, the third-party shopping system can maintain a shopping cart that tracks the user's purchases from multiple virtual shopping destinations (e.g., from multiple apparel stores). This way, the system does not need to track the user's purchases across the multiple shopping destinations, but hands off this responsibility to the third-party shopping system.
In some embodiments, each time a user places an item in the shopping cart, a particular link to the selected item can be sent to the third-party shopping system, allowing the shopping system to obtain detailed information (e.g., item description and price) associated with the selected item. For example, a user may select a shirt from a first virtual store and a pair of pants from a second virtual store. Instead of having to pay at each store separately, the user can do a final checkout at a single location. In addition to providing convenience to the user, this also allows for scaling up of the system. The system can add more virtual shopping destinations (e.g., virtual stores) without the need for establishing additional payment systems”. Therefore it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the combination of Rathod and Emma, receiving a selection to add the product or the service to a virtual shopping cart provided by the virtual marketplace platform, as is taught by Ghazanfari since the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products). In addition to chatting, the AI character may also perform actions in response to the user's actions or commands, thus providing the user with a shopping experience similar to the one in the physical world”. (Ghazanfari, para 0044).

Regarding claims 6, 13 and 19, the combination of Rathod, Emma, Ghazanfari in further view of Perkins teaches the limitations set forth above. Rathod further discloses providing a backend tool for the entity to enable the entity to create social media posts that are configured to be shared to a plurality of social media platforms simultaneously. (Rathod, FIG 42, item 4251), shared post shared and commented by millions on social media.
Regarding claim 9, the combination of Rathod, Emma, Ghazanfari in further view of Perkins teaches the limitations set forth above. Rathod further discloses: wherein the one or more virtual avatars are accessible via an application programming interface associated with another system and the one or more virtual avatars are implemented in the another system. (Rathod, para 0249), “Third party or external entities including advertisers, sellers, sponsors, vendors, shops, users may, in one example embodiment, create virtual objects 466/445 for displaying for user”.

Regarding claim 20, the combination of Rathod, Emma, Ghazanfari in further view of Perkins teaches the limitations set forth above. Rathod in view of Emma does not disclose: wherein the processing device transmit information pertaining to interactions associated with the customer virtual avatar to a customer relationship management platform via an application programming interface. However Ghazanfari teaches: wherein the processing device transmit information pertaining to interactions associated with the customer virtual avatar to a customer relationship management platform via an application programming interface. (Ghazanfari, Para 0052) “the backend of the system may also provide the virtual store designer or the merchant with an interactive interface that allows the merchant to access customers of the virtual store in real time. In one embodiment, the merchant may be able to send customized push notifications (e.g., coupons or promotion sales) to customers. Such push notifications can sometimes bypass the virtual shopping system, given that they abide a number of basic rules (e.g., timing). Moreover, a merchant can also perform market testing (e.g., A/B testing) using the virtual shopping system. The merchant can present testing samples to customers in its virtual store, and the virtual system can provide analytics (e.g., duration of gaze, facial expression, etc.)
back to the merchant in response to the testing samples”. Therefore it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to include in the combination of Rathod and Emma, wherein the processing device transmit information pertaining to interactions associated with the customer virtual avatar to a customer relationship management platform via an application programming interface, as is taught by Ghazanfari since the AI character can have a meaningful conversation with the user (e.g., answering questions or introducing products). In addition to chatting, the AI character may also perform actions in response to the user's actions or commands, thus providing the user with a shopping experience similar to the one in the physical world”. (Ghazanfari, para 0044).

Response to Arguments

Applicant’s arguments, filed 3/5/2026 with respect to 35 USC 112(a) have been fully considered and are persuasive. The rejection has been withdrawn in view of the claim amendments. Applicant's arguments filed 3/5/2026 have been fully considered but they are not persuasive. The rejection above has been updated to address the claims as amended.
With respect to the arguments directed to Perkins not teaching “using an artificial intelligence engine that is trained to provide responses via the entity virtual avatar, wherein the entity virtual avatar is configured based on the image of the owner or representative and using the voice of the owner or representative, and wherein the artificial intelligence engine is trained to receive input from the customer virtual avatar and determine an output to respond, via the entity virtual avatar using the voice, to the input, wherein the input comprises a question and the output comprises an answer; presenting the answer on a user interface of a computing device of the customer; adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period”, the examiner asserts that the rejection does not use Perkins to teach all of the above limitations. Perkins is only relied upon for “adjusting at least one aspect of the at least one virtual building associated with the entity virtual avatar a plurality of actions taken by at least one of the customer virtual avatar and at least one other customer virtual avatar in the at least one virtual building associated with the entity virtual avatar, wherein the plurality of actions are tracked over a defined period”. The examiner maintains that the reference does teach this limitation as amended. Perkins is adjusting an aspect of the building, which is for example the shelf space or arrangement of products. These products have been arranged based on actions or patterns performed by the customer avatars over a period of time (observed patterns). 
Under the broadest reasonable interpretation, the updating of a planogram of a store within a virtual environment based on avatar customer patterns of behavior is adjusting aspects of the virtual building based on avatar actions that have been tracked over time. As no specific remarks are provided for the additional cited references, the examiner has focused this response on the remarks directed to Perkins; however, the examiner maintains that the previously cited references, as a whole, still teach the claims as amended.

Relevant Art Not Cited

“Configurable Virtual Reality Store with Contextual Interaction Interface” discusses the expansion of 3D models in planning a store layout using a configurable VR shopping space to research merchandising and e-commerce.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to VICTORIA E. FRUNZI whose telephone number is (571)270-1031. The examiner can normally be reached Monday-Friday 7-4 (EST). Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool.
To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Marissa Thein, can be reached at (571) 272-6764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

VICTORIA E. FRUNZI
Primary Examiner
Art Unit 3689

/VICTORIA E. FRUNZI/
Primary Examiner, Art Unit 3689
3/27/2026

Prosecution Timeline

Jun 13, 2022
Application Filed
May 01, 2024
Non-Final Rejection — §103
Aug 07, 2024
Interview Requested
Nov 06, 2024
Applicant Interview (Telephonic)
Nov 06, 2024
Examiner Interview Summary
Nov 07, 2024
Response Filed
Jan 10, 2025
Final Rejection — §103
May 15, 2025
Response after Non-Final Action
Jun 16, 2025
Request for Continued Examination
Jun 23, 2025
Response after Non-Final Action
Sep 04, 2025
Non-Final Rejection — §103
Mar 05, 2026
Response Filed
Mar 27, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12561733
DYNAMICALLY PRESENTING AUGMENTED REALITY CONTENT GENERATORS BASED ON DOMAINS
2y 5m to grant Granted Feb 24, 2026
Patent 12524795
SINGLE-SELECT PREDICTIVE PLATFORM MODEL
2y 5m to grant Granted Jan 13, 2026
Patent 12518309
SYSTEMS AND METHODS FOR REDUCING PERSONALIZED REAL ESTATE COLLECTION SUGGESTION DELAYS VIA BATCH GENERATION
2y 5m to grant Granted Jan 06, 2026
Patent 12417481
SYSTEMS AND METHODS FOR AUTOMATING CLOTHING TRANSACTION
2y 5m to grant Granted Sep 16, 2025
Patent 11810156
SYSTEMS, METHODS, AND DEVICES FOR COMPONENTIZATION, MODIFICATION, AND MANAGEMENT OF CREATIVE ASSETS FOR DIVERSE ADVERTISING PLATFORM ENVIRONMENTS
2y 5m to grant Granted Nov 07, 2023
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
24%
Grant Probability
48%
With Interview (+23.8%)
4y 3m
Median Time to Grant
High
PTA Risk
Based on 284 resolved cases by this examiner. Grant probability derived from career allow rate.
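The projection figures above are consistent with a simple additive model: the 24% baseline is the examiner's career allow rate (68 granted / 284 resolved), and the 48% "with interview" figure is that baseline plus the +23.8% interview lift, capped at 100%. The sketch below reproduces that arithmetic; the function name and the additive-lift model itself are assumptions inferred from the page's numbers, not a documented methodology.

```python
def grant_probability(granted: int, resolved: int,
                      interview_lift: float = 0.0) -> float:
    """Baseline grant probability from the examiner's career allow rate,
    optionally shifted by an additive interview lift (assumed model:
    baseline + lift, capped at 1.0)."""
    baseline = granted / resolved
    return min(baseline + interview_lift, 1.0)

baseline = grant_probability(68, 284)               # ~0.239, shown as 24%
with_interview = grant_probability(68, 284, 0.238)  # ~0.477, shown as 48%
```

If the lift were instead multiplicative or derived from a matched-case comparison, the combined figure would differ, so treat the 48% as an estimate under this additive assumption.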
