Prosecution Insights
Last updated: April 19, 2026
Application No. 17/982,365

Metaverse Content Modality Mapping

Status: Non-Final Office Action (§103)
Filed: Nov 07, 2022
Examiner: HSU, JONI
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Niantic, Inc.
OA Round: 5 (Non-Final)
Grant Probability: 87% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 9m
With Interview: 95%

Examiner Intelligence

Career Allow Rate: 87% — above average (741 granted / 848 resolved; +25.4% vs TC avg)
Interview Lift: +7.2% (moderate), comparing resolved cases with vs. without an interview
Typical Timeline: 2y 9m average prosecution (34 applications currently pending)
Career History: 882 total applications across all art units
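
A quick sanity check on these cards, as a minimal sketch; it assumes the displayed figures are simple ratios and percentage-point differences:

```python
# Recompute the examiner cards from the raw counts shown above.
granted, resolved = 741, 848
allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")         # -> 87.4%, shown as 87%

# Implied Tech Center average, if the +25.4% delta is percentage points.
print(f"Implied TC avg:    {allow_rate - 25.4:.1f}%")  # -> 62.0%
```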

Statute-Specific Performance

§101:  8.4% (-31.6% vs TC avg)
§102: 11.4% (-28.6% vs TC avg)
§103: 59.7% (+19.7% vs TC avg)
§112:  3.1% (-36.9% vs TC avg)

Tech Center averages are estimates • Based on career data from 848 resolved cases
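
Assuming each "vs TC avg" delta is a plain percentage-point difference (an assumption; the figures above are labeled estimates), the implied Tech Center baseline can be backed out. Notably, all four statutes imply the same ~40% baseline, consistent with a single average line in the original chart:

```python
# Back out the implied Tech Center average for each statute, assuming
# delta = examiner_rate - tc_average, in percentage points.
rates = {          # statute: (examiner rate %, delta vs TC avg %)
    "§101": (8.4, -31.6),
    "§102": (11.4, -28.6),
    "§103": (59.7, +19.7),
    "§112": (3.1, -36.9),
}
for statute, (examiner, delta) in rates.items():
    print(f"{statute}: TC avg ~{examiner - delta:.1f}%")   # each -> ~40.0%
```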

Office Action — §103

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 4, 2026 has been entered.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on October 6, 2025 was filed after the mailing date of the application on November 7, 2022. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 1, 8, and 15 are objected to because of the following informalities: Claims 1, 8, and 15 each recite "…three-dimensional (3D) representation the virtual environment…using the mapping to update the virtual using…" where Applicant is assumed to have meant "…three-dimensional (3D) representation of the virtual environment…using the mapping to update the virtual environment using…" Appropriate correction is required.

Response to Arguments

Applicant's arguments with respect to claim(s) 1-20 have been considered but are moot because new grounds of rejection are made in view of Yan 1 (US 20210110609A1).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim(s) 1, 4, 8, 11, 15, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dama (US 20210377514A1), Schwarz (US 20180342103A1), and Yan 1 (US 20210110609A1).

As per Claim 1, Dama teaches a system comprising: one or more processors (computing device 12) [0023]. It would have been obvious to one of ordinary skill in the art that there is logic encoded in one or more non-transitory computer-readable storage media for execution by the one or more processors (12) and, when executed, causing the system to perform operations, in order for the processor to be able to operate.

Dama teaches that 3D stereoscopy is applied in various fields such as a virtual environment [0003]. Dama teaches re-formatting an incoming 3D video stream into a version compatible with a 2D display while preserving the 3D-type of presentation (Abstract). Thus, it would have been obvious to one of ordinary skill in the art that the 2D display device displays the virtual environment, since the 3D virtual environment is re-formatted into a version compatible with the 2D display while preserving the 3D virtual environment-type of presentation.

Dama teaches obtaining functionality developed for a first modality of a virtual environment, wherein the first modality comprises display of a two-dimensional (2D) representation of the virtual environment on a 2D display device (10, Fig. 1), and wherein the functionality comprises a 2D control of the 2D display device for interacting with the virtual environment (2D display 10 that is paired with a user's computing device 12, intercept 3D video content received by computing device 12, and convert the 3D formatted video into a format compatible with the graphics capabilities of 2D display 10, [0019], [0003]).

Dama teaches generating a mapping of the functionality for the second modality of the virtual environment, the second modality comprising display of a three-dimensional (3D) representation the virtual environment on a three-dimensional (3D) display of the client device, wherein the mapping comprises linking the 2D control of the first modality to a 3D control of the second modality; receiving, from the client device, an indication that a user has interacted with the 3D control while viewing the 3D representation of the virtual environment in the second modality; using the mapping to update the virtual using an interaction triggered by the 2D control of the first modality based on the user's interaction with the 3D control; determining updated frames for the 3D representation of the environment in the second modality based on the update to the virtual environment made using the interaction triggered by the 2D control of the first modality; and providing, to the client device, the updated frames (re-formatting the incoming 3D video stream into a version compatible with 2D display 10 while preserving the 3D-type of presentation, a user equipped with 3D glasses 14 is able to have the desired 3D experience, as long as 3D glasses 14 remain synchronized with the sequence of frames shown on 2D display 10, the user will actually be viewing an interactive 3D video, [0020], computing device 12 receives an incoming multimedia stream from an external source 30, where external source 30 is depicted as a source of software-based learning modules that utilize 3D objects, user communicates with external source 30 via a communication network 40, thus, for situations where a student would like to participate in a distance learning endeavor with external source 30, 2D/3D conversion interface device 20 is used, and simply connected between the user's computer and display device, by maintaining the bi-directional communication link between the user and external source 30, the user is able to use control commands (entered via a keyboard, smartphone-enabled device, etc.) to manipulate the actual 3D projection, [0021], [0003]).

However, Dama does not teach wherein the mapping comprises linking the 2D control of the first modality to a 3D input gesture of the second modality; receiving, from the client device, an indication that a user has made the 3D input gesture while viewing the 3D representation of the virtual environment in the second modality; using the mapping to update the virtual using an interaction triggered by the 2D control of the first modality based on the user's making of the 3D input gesture while viewing the 3D representation of the virtual environment in the second modality.

However, Schwarz teaches wherein the mapping comprises linking the 2D control to a 3D input gesture; receiving, from the client device, an indication that a user has made the 3D input gesture while viewing the 3D representation of the virtual environment in the second modality; using the mapping to update the virtual using an interaction triggered by the 2D control of the first modality based on the user's making of the 3D input gesture while viewing the 3D representation of the virtual environment in the second modality (virtual content in an augmented-reality scene can be directly interacted with by a user, virtual content 800A is being rendered in the form of a virtual tablet, virtual content may be rendered in a manner such that it looks 3D (globe 330), the virtual content is actually only 2D, to be properly displayed in the augmented-reality scene, it is necessary to translate a user's 3D input gesture onto a 2D coordinate system in relation to the virtual content 800A, [0086], translated values of the user's 3D scrolling action 820A onto a 2D coordinate system relative to the virtual content 800A, [0088]).

Since Dama teaches linking the 2D control of the first modality to a 3D control of the second modality [0021, 0003], this teaching from Schwarz can be implemented into the device of Dama so that it links the 2D control of the first modality to a 3D input gesture of the second modality. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama so that the mapping comprises linking the 2D control of the first modality to a 3D input gesture of the second modality; receiving, from the client device, an indication that a user has made the 3D input gesture while viewing the 3D representation of the virtual environment in the second modality; using the mapping to update the virtual using an interaction triggered by the 2D control of the first modality based on the user's making of the 3D input gesture while viewing the 3D representation of the virtual environment in the second modality, because Schwarz suggests that this way, the user can make 3D gestures to interact with virtual content that is rendered in a manner such that it looks 3D, such as interacting with a globe, so it seems like the user is interacting with a real globe, but the virtual content is actually only 2D on a virtual tablet [0086], in order to leverage a user's familiarity with an actual tablet to enable that user to more intuitively interact with virtual content [0028].
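
Illustrative aside (not part of the Office Action): the 3D-to-2D translation Schwarz describes can be pictured as projecting a 3D gesture point onto the plane of a flat virtual surface. Below is a minimal sketch of that idea; the plane basis, the fingertip sample, and all names are made-up assumptions, not taken from Schwarz.

```python
# Translate a 3D gesture point into 2D coordinates on a planar virtual
# surface by projecting it onto the surface's basis vectors.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_surface_2d(point, origin, u_axis, v_axis):
    """Map a 3D point to (u, v) coordinates on a planar virtual surface."""
    rel = [p - o for p, o in zip(point, origin)]
    return dot(rel, u_axis), dot(rel, v_axis)

# A virtual tablet: origin at one corner, unit axes along its edges.
origin = (0.0, 1.0, 2.0)
u_axis = (1.0, 0.0, 0.0)       # rightward edge of the virtual tablet
v_axis = (0.0, 1.0, 0.0)       # upward edge

fingertip = (0.3, 1.5, 2.0)    # sampled 3D gesture position
print(to_surface_2d(fingertip, origin, u_axis, v_axis))  # -> (0.3, 0.5)
```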
However, Dama and Schwarz do not teach that the system is a server; establishing a connection with a client device; determining, responsive to establishing the connection, a type of the client device, the type indicating a second modality of a plurality of possible second modalities, the second modality being used by the client device to interact with the virtual environment; loading an interaction mapping library for the second modality, the interaction mapping library mapping the functionality developed for the first modality to the second modality of the virtual environment; activating software modules associated with the second modality; using the software modules to cause the client device to display the virtual environment in the second modality.

However, Yan 1 teaches that the system is a server (102); establishing a connection with a client device (104); determining, responsive to establishing the connection, a type of the client device, the type indicating a second modality of a plurality of possible second modalities, the second modality being used by the client device (server 102 may be configured to receive requests from input devices 104 for registering the input devices 104 with a shared interactive environment 114, the shared interactive environment 114 may include a virtual and mixed-reality environment in which devices supporting different types of environments and modalities (associated with VR, AR, PC) are configured to interact, server 102 may be configured to identify the modality supported by the input device 104, [0026]) to interact with the virtual environment (114) (input devices 104 may be configured to provide inputs for controlling the virtual characters within the shared interactive environment 114, [0029]); loading an interaction mapping library for the second modality, the interaction mapping library mapping the functionality developed for the first modality to the second modality of the virtual environment (mobility mapping engine 200 may be configured to access the modality-specific table, mapping or ledger for translating device-specific inputs into standardized inputs, each modality-specific table may include device-specific inputs and standardized inputs corresponding thereto, mobility mapping engine 200 may be configured to provide the standardized inputs to the server 102 for updating the shared virtual environment 114 (controlling a virtual character's movement in the shared environment 114), [0035]); activating software modules associated with the second modality; using the software modules to cause the client device to display the virtual environment in the second modality (mobility mapping engine 200 may include any software implemented to control movement of a virtual character based on inputs from a user, [0031], [0035]).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama and Schwarz so that the system is a server; establishing a connection with a client device; determining, responsive to establishing the connection, a type of the client device, the type indicating a second modality of a plurality of possible second modalities, the second modality being used by the client device to interact with the virtual environment; loading an interaction mapping library for the second modality, the interaction mapping library mapping the functionality developed for the first modality to the second modality of the virtual environment; activating software modules associated with the second modality; using the software modules to cause the client device to display the virtual environment in the second modality, because Yan 1 suggests that this way, different input devices with different modalities can all interact with a shared interactive environment (Abstract).

As per Claim 4, Dama does not teach wherein the 3D input gesture comprises one or more user gestures in a three-dimensional scene associated with the second modality. However, Schwarz teaches wherein the 3D input gesture comprises one or more user gestures in a three-dimensional scene associated with the second modality [0086]. This would be obvious for the reasons given in the rejection for Claim 1.

As per Claims 8 and 11, these claims are similar in scope to Claims 1 and 4 respectively, and therefore are rejected under the same rationale.

As per Claims 15 and 18, these claims are similar in scope to Claims 1 and 4 respectively, and therefore are rejected under the same rationale.

13. Claim(s) 2, 9, and 16 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dama (US 20210377514A1), Schwarz (US 20180342103A1), and Yan 1 (US 20210110609A1) in view of Yan 2 (US 11195020 B1).

14. As per Claim 2, Dama, Schwarz, and Yan 1 are relied upon for the teachings as discussed above relative to Claim 1. Dama teaches wherein the first modality is associated with a first device type, wherein the first device type is a mobile device (smartphone and associated 2D display, [0014]), wherein the second modality is associated with a second device type, and wherein the second device type is one of an augmented reality headset or a virtual reality headset (3D glasses 14) [0027, 0003]. However, Dama, Schwarz, and Yan 1 do not expressly teach that the first modality is associated with augmented reality. However, Yan 2 teaches wherein the first modality is associated with augmented reality of a first device type, wherein the first device type is a mobile device (first device 102 may include an augmented reality device, first device 102 may include a mobile device, col. 4, lines 28-31), wherein the second modality is associated with a second device type, and wherein the second device type is one of an augmented reality headset or a virtual reality headset (second device 104 may be configured for a particular modality, second device may be configured for a modality corresponding to a VR environment, col. 7, lines 35-41; image of a virtual object corresponding to a location of the HMD and a gaze direction of the user can be displayed on the HMD to allow the user to feel as if the user is moving within a space of an artificial reality (a VR space), col. 1, lines 14-20). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama, Schwarz, and Yan 1 so that the first modality is associated with augmented reality, as suggested by Yan 2. It is well-known in the art that augmented reality enhances natural environments or situations and offers perceptually enriched experiences.

15. As per Claims 9 and 16, these claims are each similar in scope to Claim 2, and therefore are rejected under the same rationale.

16. Claim(s) 3, 10, and 17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dama (US 20210377514A1), Schwarz (US 20180342103A1), and Yan 1 (US 20210110609A1) in view of Gaxiola (US 20110296030A1).

17. As per Claim 3, Dama, Schwarz, and Yan 1 are relied upon for the teachings as discussed above relative to Claim 1. However, Dama, Schwarz, and Yan 1 do not teach wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising disabling software modules not supported by the client device. However, Gaxiola teaches wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising disabling software modules not supported by the client device (if the rendering device does not support any feature in a capability group, the remaining features in the same capability group are automatically disabled for the rendering device, [0010]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama, Schwarz, and Yan 1 so that the logic when executed is further operable to cause the one or more processors to perform operations comprising disabling software modules not supported by the client device, because Gaxiola suggests that this way, content can be rendered on a variety of devices with different capabilities, with a controlled adaptation level, by disabling features that are not supported [0020, 0010].

18. As per Claims 10 and 17, these claims are each similar in scope to Claim 3, and therefore are rejected under the same rationale.

19. Claim(s) 5, 12, and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dama (US 20210377514A1), Schwarz (US 20180342103A1), and Yan 1 (US 20210110609A1) in view of Parashar (US 20210358294A1) and Dal Mutto (US 20150316996A1).

20. As per Claim 5, Dama, Schwarz, and Yan 1 are relied on for teachings as discussed above relative to Claim 1. However, Dama, Schwarz, and Yan 1 do not teach mapping user interaction with one or more input devices associated with the second modality to one or more elements associated with the first modality. However, Parashar teaches wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising mapping user interaction with one or more input devices associated with the second modality to one or more elements associated with the first modality (HMD device 200 may include an optical sensor system that utilizes outward-facing sensors, the outward-facing sensors detect movements within its field of view, such as gesture-based inputs performed by a user, [0038], remote control device 800 and the physical controlled device 802 may take the form of one or more AR/VR HMD computers, mobile communication devices, and/or other computing device, [0075], remote control device 800 includes a user interface controller 808, a 3D mapping subsystem 812, a remote control engine 814, [0076], user interface controller 808 manages user input within the mixed reality environment provided by the remote control device 800, with assistance from the 3D mapping subsystem 812, the user interface controller 808 can detect interactions of physical objects and virtual objects within the user's field of view, to detect user activation of a virtual control by a physical object, like a user's finger, [0078], remote control engine 814 generates a remote control instruction representing the user activation of the virtual control by the user, the remote control instruction is supported by the physical controlled device 802 to perform the user activation operation on the physical controlled device 802, [0080]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama, Schwarz, and Yan 1 to include mapping user interaction with one or more input devices associated with the second modality to one or more elements associated with the first modality, because Parashar suggests that this allows a user to remotely control a physical controlled device with virtual control elements in a mixed reality environment in a way that is uncomplicated and intuitive for the user [0004, 0075, 0076, 0078, 0080].

However, Dama, Schwarz, Yan 1, and Parashar do not teach mapping the user making of the 3D input gesture with the one or more input devices to one or more two-dimensional user interface elements. However, Dal Mutto teaches mapping the user making of the 3D input gesture with the one or more input devices to one or more two-dimensional user interface elements (allow a user to interact with a program by making gestures in front of an acquisition device of a computing device such as a mobile phone, [0048], in a remapping between three-dimensional gestures and a two-dimensional user interface, various points in 3D space are mapped directly to corresponding points in the 2D user interface, calculating a 2D cursor position from a 3D position detected based on 3D data from an acquisition system, [0066]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama, Schwarz, Yan 1, and Parashar to include mapping the user making of the 3D input gesture with the one or more input devices to one or more two-dimensional user interface elements, because Dal Mutto suggests that this way, a user can easily make gestures in a 3D space without needing to make physical contact with a portion of the device, and these gestures need to be mapped to a 2D user interface in order to control motions within an application with respect to a 2D surface such as a display device [0003, 0057, 0066].

21. As per Claims 12 and 19, these claims are each similar in scope to Claim 5, and therefore are rejected under the same rationale.

22. Claim(s) 6, 7, 13, 14, and 20 is/are rejected under 35 U.S.C. 103 as being unpatentable over Dama (US 20210377514A1), Schwarz (US 20180342103A1), and Yan 1 (US 20210110609A1) in view of da Silva Pratas Gabriel (US 20210266613A1).

23. As per Claim 6, Dama, Schwarz, and Yan 1 are relied on for teachings as discussed above relative to Claim 1. However, Dama, Schwarz, and Yan 1 do not teach adapting one or more background elements in a three-dimensional scene associated with the second modality based on the type of the client device. However, da Silva Pratas Gabriel teaches wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising adapting one or more background elements in a three-dimensional scene associated with the second modality based on the type of the client device (generating of composite video 170 may involve rewriting the bitstream of background video 150, such rewriting may comprise changing parameters in the bitstream, e.g. high-level syntax parameters, such as tile locations and dimensions in the Picture Parameter Set, [0136], establish a visual rendering of a VR environment in which the composite video stream is displayed, the processor system may then output rendered image data to an HMD 650, [0164], VR may be used to render scenes which are represented by three-dimensional graphics, [0003]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama, Schwarz, and Yan 1 to include adapting one or more background elements in a three-dimensional scene associated with the second modality based on the type of the client device, because da Silva Pratas Gabriel suggests that this way, each of the users is provided with a different background video that is displayed appropriately for each user [0158].

24. As per Claim 7, Dama, Schwarz, and Yan 1 do not teach adapting at least one target object in a three-dimensional scene associated with the second modality based on the type of the client device. However, da Silva Pratas Gabriel teaches wherein the logic when executed is further operable to cause the one or more processors to perform operations comprising adapting at least one target object in a three-dimensional scene associated with the second modality based on the type of the client device (image data of the foreground video stream being inserted using translucency or other types of blending, [0135], generating a composite video stream which may combine a background video and a foreground video stream into one stream, Abstract, [0164, 0003]). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify Dama, Schwarz, and Yan 1 to include adapting at least one target object in a three-dimensional scene associated with the second modality based on the type of the client device, because da Silva Pratas Gabriel suggests that this way, the foreground video is processed using translucency so that both the foreground video and background video can be correctly displayed, or the foreground video is blended with the background video in order to correctly display a composite video [0135].

25. As per Claims 13-14, these claims are similar in scope to Claims 6-7 respectively, and therefore are rejected under the same rationale. As per Claim 20, Claim 20 is similar in scope to Claim 6, and therefore is rejected under the same rationale.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONI HSU, whose telephone number is (571) 272-7785. The examiner can normally be reached M-F 10am-6:30pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

JH
/JONI HSU/
Primary Examiner, Art Unit 2611
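
To make the "interaction mapping library" at the center of these rejections concrete: the claimed server detects a client's modality on connection, loads a per-modality mapping, and translates modality-specific inputs into the standardized 2D controls the existing functionality already handles. Below is a minimal illustrative sketch of that pattern; all names, types, and mappings are assumptions for illustration, not taken from the claims or the cited references.

```python
# Illustrative sketch of an interaction mapping library: per-modality
# tables translate device-specific inputs into the standardized 2D
# controls that the first-modality functionality already understands.
# All names and mappings here are hypothetical.

MAPPING_LIBRARIES = {
    "vr_headset": {"pinch": "click", "palm_swipe": "scroll"},
    "ar_headset": {"air_tap": "click", "gaze_dwell": "hover"},
    "pc":         {"mouse_down": "click", "wheel": "scroll"},
}

def handle_client_input(client_type: str, raw_input: str) -> str:
    """Resolve a raw input from the detected modality into a 2D control."""
    library = MAPPING_LIBRARIES[client_type]   # "loaded" on connection
    return library[raw_input]                  # e.g. "pinch" -> "click"

# A VR client's pinch gesture drives the same update path as a 2D click:
assert handle_client_input("vr_headset", "pinch") == "click"
```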

Prosecution Timeline

Nov 07, 2022
Application Filed
Jun 06, 2024
Non-Final Rejection — §103
Sep 12, 2024
Response Filed
Sep 12, 2024
Applicant Interview (Telephonic)
Sep 12, 2024
Examiner Interview Summary
Nov 22, 2024
Final Rejection — §103
Jan 17, 2025
Applicant Interview (Telephonic)
Jan 17, 2025
Examiner Interview Summary
Jan 22, 2025
Response after Non-Final Action
Jan 30, 2025
Request for Continued Examination
Jan 31, 2025
Response after Non-Final Action
Mar 25, 2025
Non-Final Rejection — §103
Jun 30, 2025
Response Filed
Oct 02, 2025
Final Rejection — §103
Feb 04, 2026
Request for Continued Examination
Feb 13, 2026
Response after Non-Final Action
Feb 20, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592028 — METHODS AND DEVICES FOR IMMERSING A USER IN AN IMMERSIVE SCENE AND FOR PROCESSING 3D OBJECTS
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12586306 — METHOD, ELECTRONIC DEVICE, AND COMPUTER PROGRAM PRODUCT FOR MODELING OBJECT
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12586260 — CREATING IMAGE ENHANCEMENT TRAINING DATA PAIRS
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12581168 — A METHOD FOR A MEDIA FILE GENERATING AND A METHOD FOR A MEDIA FILE PROCESSING
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12561850 — IMAGE GENERATION WITH LEGIBLE SCENE TEXT
Granted Feb 24, 2026 • 2y 5m to grant
Study what changed in these cases to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 87%
With Interview: 95% (+7.2%)
Median Time to Grant: 2y 9m
PTA Risk: High

Based on 848 resolved cases by this examiner. Grant probability is derived from the career allow rate.
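
The "With Interview" figure appears to be the career allow rate plus the interview lift; a minimal sketch of that projection (the additive, capped formula is an assumption):

```python
# Hypothetical reconstruction of the "With Interview" projection.
base = 741 / 848 * 100           # career allow rate, ~87.4%
lift = 7.2                       # interview lift, percentage points
with_interview = min(base + lift, 100.0)
print(f"{with_interview:.1f}%")  # -> 94.6%, displayed as 95%
```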
