Prosecution Insights
Last updated: April 19, 2026
Application No. 18/534,494

METAVERSE ENVIRONMENT READER AND NAVIGATION ASSISTANT

Final Rejection (§103)
Filed: Dec 08, 2023
Examiner: DEMETER, HILINA K
Art Unit: 2617
Tech Center: 2600 — Communications
Assignee: SAP SE
OA Round: 2 (Final)

Grant Probability: 72% (Favorable)
OA Rounds: 3-4
To Grant: 3y 1m
With Interview: 91%

Examiner Intelligence

Career Allow Rate: 72% (472 granted / 659 resolved; +9.6% vs TC avg; above average)
Interview Lift: +19.4% among resolved cases with an interview (strong)
Avg Prosecution: 3y 1m (typical timeline)
Currently Pending: 27 applications
Career History: 686 total applications across all art units

Statute-Specific Performance

§101: 8.7% (-31.3% vs TC avg)
§103: 61.0% (+21.0% vs TC avg)
§102: 14.5% (-25.5% vs TC avg)
§112: 6.7% (-33.3% vs TC avg)

Tech Center averages are estimates. Based on career data from 659 resolved cases.
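
As a sanity check, the Tech Center averages implied by these deltas can be recovered with simple arithmetic (a sketch, assuming each delta is the plain difference between the examiner's rate and the TC average):

```python
# Implied Tech Center averages, assuming each "vs TC avg" delta is a
# simple difference: examiner_rate - tc_avg = delta, so tc_avg = rate - delta.
shown = {"101": (8.7, -31.3), "103": (61.0, +21.0),
         "102": (14.5, -25.5), "112": (6.7, -33.3)}
for statute, (rate, delta) in shown.items():
    print(f"§{statute}: TC avg ≈ {rate - delta:.1f}%")
# Every statute works out to 40.0%, which suggests the dashboard draws a
# single flat baseline as its "Tech Center average estimate".
```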

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed on 01/23/2026 have been fully considered but they are not persuasive.

On page 8, Applicant argues that the prior art does not teach "creating, by the metaverse environment reader, a description of the first scene based on the indexing of the first scene, wherein the description is an audio, haptic, or braille representation of the first scene". In response: Alon in para. [0512] disclosed that the system may be configured to output for display an index of the plurality of images of the plurality of complementary objects. An index may include a number, text, a symbol, etc. representing a corresponding image of a complementary object. The disclosed system may display the plurality of images of the complementary objects together with their respective indices on a display device. See also para. [0274]: displaying may include providing visual data (e.g., via projector or screen), audio data (e.g., by playing a sound on a speaker), and/or tactile data (e.g., via a haptic feedback device). Thus, the stated argument is not persuasive.

On page 9, Applicant argues that the prior art does not teach "conveying, by the metaverse environment reader, the one or more electrical signals encoded with the description of the first scene to a user device to be presented on a user interface". In response: Alon in para. [0287] explains that a scene may include a 3D representation of a living room encoded as a mesh as discussed above. To illustrate this principle, a mesh of a scene may include establishing points that constitute a floor, a wall, a doorway, stairs, etc. See also para. [0497]: the system may present a user interface that may display, for example, several cups and several suggested locations (e.g., next to a bottle, on the table). Note that the scene is presented to the user. Thus, the stated argument is not persuasive.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4-6, 11, 14-16 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Alon et al. (US Publication Number 2021/0383115 A1, hereinafter "Alon").

(1) Regarding claim 1: As shown in fig. 1, Alon disclosed a method (fig. 1, para. [0032], note that FIG. 1 depicts an exemplary system for augmenting or reconstructing a 2D or 3D scene or image), comprising:

receiving, by a metaverse environment reader, a first scene of a metaverse (para. [0248], note that at step 402, visual input reconstruction system 120 may receive or retrieve an input scene. An input scene may be received or retrieved from a data storage, consistent with disclosed embodiments);

performing, by the metaverse environment reader, semantic segmentation and object detection steps to identify a plurality of objects in the first scene (para. [0251], note that the visual input reconstruction system 120 may scan to detect an object such as a chair, a table, or a soda bottle. Other examples of objects are possible. Step 404 may include receiving or retrieving image object identifiers. Scanning may include object recognition algorithms (e.g., machine learning methods). Scanning may include determining or detecting an object insertion location);

indexing, by the metaverse environment reader, the plurality of objects of the first scene based on the semantic segmentation and object detection steps (para. [0252], note that input reconstruction system 120 may extract an object from a scene. Extracting may include generating or copying image object data related to an object detected in a scene. The system may extract 2D or 3D shapes of objects or elements or portions of the scenes. For example, the system may generate a model of a detected game chair. See also para. [0254]);

creating, by the metaverse environment reader, a description of the first scene based on the indexing of the first scene, wherein the description is an audio, haptic, or braille representation of the first scene (para. [0274], note that client device 110 may generate an interface at a headset, an LED screen, a touch screen, and/or any other screen. An interface may include input and output devices capable of receiving user input and providing information to a user, as described herein, or any other type of interface. Displaying may include providing visual data (e.g., via projector or screen), audio data (e.g., by playing a sound on a speaker), and/or tactile data (e.g., via a haptic feedback device));

generating, by the metaverse environment reader, one or more electrical signals which include an encoding of the description of the first scene (para. [0286], note that a scene may be configured for display via a device, such as a headset, a computer screen, a monitor, a projection, etc. Aspects of a scene may be encoded in a known format, such as a 3D vector format, a Computer-Aided Design file, .FLV, .MP4, .AVI, .MPG, .MP3, .MOV, .F4V, .VR, or any other image, video, or model format).

Alon discloses most of the subject matter described above except for specifically teaching conveying, by the metaverse environment reader, the one or more electrical signals encoded with the description of the first scene to a user device to be presented on a user interface. However, Alon teaches conveying, by the metaverse environment reader, the one or more electrical signals encoded with the description of the first scene to a user device to be presented on a user interface (para. [0287], note that a scene may include a 3D representation of a living room encoded as a mesh as discussed above. To illustrate this principle, a mesh of a scene may include establishing points that constitute a floor, a wall, a doorway, stairs, etc.; para. [0497], note that the system may present a user interface that may display, for example, several cups and several suggested locations (e.g., next to a bottle, on the table). The system may also permit a user to modify the location where the added object (e.g., a cup) may be inserted in the scene).

At the time of filing of the invention, it would have been obvious to a person of ordinary skill in the art to convey, by the metaverse environment reader, the one or more electrical signals encoded with the description of the first scene to a user device to be presented on a user interface. The suggestion/motivation for doing so would have been to generate, augment, or reconstruct a two-dimensional (2D) or three-dimensional (3D) scene or image. More particularly, the disclosed embodiments are directed to receiving a scene from an audiovisual environment and altering, augmenting, or reconstructing one or more portions of the received scene (para. [0002]). Therefore, the invention as specified in claim 1 would have been obvious over Alon.

(2) Regarding claim 4: Alon further disclosed the method of claim 1, further comprising:

receiving, by the metaverse environment reader, a second scene of the metaverse, wherein objects in the second scene are labeled (para. [0329], note that at step 802, 3D generator system may receive a scene, consistent with disclosed embodiments. The scene may be a 2D or 3D scene. The scene may be incomplete (i.e., based on a scan that captures a partial representation of an object));

creating, by the metaverse environment reader, a second description of the second scene based on one or more labels of one or more objects in the second scene (para. [0330], note that at step 804, 3D generator system may segment a scene, consistent with disclosed embodiments. As described herein, segmenting may include partitioning (i.e., classifying) image elements of a scene into scene-components or objects such as table 806, sofa 808, chair 810, and/or other components or objects);

generating, by the metaverse environment reader, one or more second electrical signals which include an encoding of the second description of the second scene (para. [0330], note that step 804 may include generating a mesh, point cloud, or other representation of a scene. A scene-component may include a complete object (e.g., a cup), a part of an object (e.g., a handle of a cup), or a partial representation of an object (e.g., a cup as-viewed from one side)); and

conveying, by the metaverse environment reader, the one or more second electrical signals encoded with the second description of the first scene to the user device to be presented on the user interface (para. [0335], note that step 804 may include generating a mesh, point cloud, or other representation of a scene. A scene-component may include a complete object (e.g., a cup), a part of an object (e.g., a handle of a cup), or a partial representation of an object (e.g., a cup as-viewed from one side)).

(3) Regarding claim 5: Alon further disclosed the method of claim 4, further comprising generating a point of interest description describing metadata for the one or more objects in the second scene, wherein the metadata is not based on visual information of the second scene or of the one or more objects in the second scene (para. [0255], note that a scene may be classified as an indoor scene, an office scene, a reception hall scene, a sport venue scene, etc. Identifying or classifying a type of a scene may be based on scene metadata describing or otherwise labeling a scene).

(4) Regarding claim 6: Alon further disclosed the method of claim 4, further comprising:

identifying, by an object reader, the plurality of objects in the first scene (para. [0251], note that at step 404, visual input reconstruction system 120 may scan an input scene for an object);

generating, by the object reader, descriptions of the plurality of objects in the first scene based on visual information associated with the plurality of objects (para. [0251], note that visual input reconstruction system 120 may scan to detect an object such as a chair, a table, or a soda bottle. Other examples of objects are possible. Step 404 may include receiving or retrieving image object identifiers. Scanning may include object recognition algorithms (e.g., machine learning methods)); and

generating, by the object reader, descriptions of the one or more objects in the second scene from the one or more labels (para. [0255], note that scanning may include identifying or classifying a type of scene based on detected and identified objects within a scene (e.g., objects may be detected that are associated with a kitchen, and a scene may be identified as a kitchen scene accordingly)).

The proposed rejection of claims 1 and 4-6 renders obvious the steps of claims 11, 14-16 and 20, because these steps occur in the operation of the proposed rejection as discussed above. Thus, the arguments presented above for claims 1 and 4-6 are equally applicable to claims 11, 14-16 and 20.

Allowable Subject Matter

Claims 2-3, 7-10, 12-13 and 17-19 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is a statement of reasons for the indication of allowable subject matter: the prior art made of record does not teach "determining, by the metaverse environment reader, an order of importance of the plurality of objects in the first scene based at least on a location and a size of each object of the plurality of objects; and sorting, by the metaverse environment reader, the plurality of objects of the first scene based on the determined order of importance", as recited in claims 2 and 12. Claims 3, 7-10, 13 and 17-19 depend from claims 2 and 12.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Canny et al. (US Patent Number 12,154,139 B2) disclosed systems for determining contextual segments for targeting contextual advertising in metaverses; the system can deploy an observer avatar in a metaverse to capture information inside a portion of a metaverse from behaviors and interactions of a target user avatar. Ummer (US Publication Number 2024/0295919 A1) disclosed a system for hosting a metaverse virtual conference (VC) that includes a cloud server, at least one headset worn by a first participant, and at least one television device associated with a second participant; the television device includes a television screen and an image capturing unit disposed on the television screen for capturing images of the second participant.

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.

In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communication from the examiner should be directed to Hilina K Demeter, whose telephone number is (571) 270-1676. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, King Y. Poon, can be reached at (571) 270-0728. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/HILINA K DEMETER/ Primary Examiner, Art Unit 2617
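
For readers skimming the rejection, the claim 1 method reduces to a read → segment/detect → index → describe → encode pipeline. Below is a minimal, hypothetical Python sketch of that flow; all names and the encoding scheme are invented for illustration and come from neither Alon nor the application.

```python
"""Hypothetical sketch of the claim 1 pipeline as the examiner maps it:
receive a scene, detect/segment objects, index them, build a description,
and encode it for an audio, haptic, or braille channel."""
from dataclasses import dataclass

@dataclass
class SceneObject:
    label: str       # e.g. "chair", as produced by object detection
    location: tuple  # (x, y, z) centroid in scene coordinates
    size: float      # approximate bounding-volume size

def detect_objects(scene: dict) -> list[SceneObject]:
    # Stand-in for semantic segmentation + object detection; a real system
    # would run ML models here. We assume the scene dict carries annotations.
    return [SceneObject(**o) for o in scene.get("objects", [])]

def index_objects(objects: list[SceneObject]) -> dict[int, SceneObject]:
    # "Indexing" step: assign each detected object a numeric index.
    return dict(enumerate(objects))

def describe_scene(index: dict[int, SceneObject]) -> str:
    # Build a textual description from the index; downstream encoders would
    # turn this into audio (TTS), haptic patterns, or braille cells.
    return "; ".join(f"object {i}: {o.label}" for i, o in index.items())

def encode_for_channel(description: str, modality: str) -> bytes:
    # Toy "electrical signal" encoding: a byte payload tagged with modality.
    if modality not in {"audio", "haptic", "braille"}:
        raise ValueError(f"unsupported modality: {modality}")
    return f"{modality}|{description}".encode("utf-8")

scene = {"objects": [{"label": "table", "location": (0, 0, 0), "size": 2.0},
                     {"label": "doorway", "location": (3, 0, 0), "size": 4.5}]}
signal = encode_for_channel(describe_scene(index_objects(detect_objects(scene))), "audio")
print(signal)  # b'audio|object 0: table; object 1: doorway'
```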
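The indicated-allowable limitation in claims 2 and 12 (ranking objects by importance from location and size, then sorting) admits a similarly small sketch, reusing the SceneObject type above. The scoring heuristic is purely hypothetical; the application presumably defines importance its own way.

```python
import math

def importance(obj: SceneObject, viewer=(0.0, 0.0, 0.0)) -> float:
    # Invented heuristic: larger objects and objects nearer the viewer score
    # higher. The claim only requires that location and size both factor in.
    return obj.size / (1.0 + math.dist(obj.location, viewer))

def sort_by_importance(objects: list[SceneObject]) -> list[SceneObject]:
    # Most important first; ties broken alphabetically for determinism.
    return sorted(objects, key=lambda o: (-importance(o), o.label))

objects = [SceneObject("table", (0, 0, 0), 2.0),
           SceneObject("doorway", (3, 0, 0), 4.5)]
print([o.label for o in sort_by_importance(objects)])  # ['table', 'doorway']
```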

Prosecution Timeline

Dec 08, 2023: Application Filed
Oct 31, 2025: Non-Final Rejection — §103
Jan 23, 2026: Response Filed
Mar 07, 2026: Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602864: EVENT ROUTING IN 3D GRAPHICAL ENVIRONMENTS (granted Apr 14, 2026; 2y 5m to grant)
Patent 12592042: SYSTEMS AND METHODS FOR MAINTAINING SECURITY OF VIRTUAL OBJECTS IN A DISTRIBUTED NETWORK (granted Mar 31, 2026; 2y 5m to grant)
Patent 12586297: INTERACTIVE IMAGE GENERATION (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579724: EXPRESSION GENERATION METHOD AND APPARATUS, DEVICE, AND MEDIUM (granted Mar 17, 2026; 2y 5m to grant)
Patent 12561906: METHOD FOR GENERATING AT LEAST ONE GROUND TRUTH FROM A BIRD'S EYE VIEW (granted Feb 24, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 72%
With Interview: 91% (+19.4%)
Median Time to Grant: 3y 1m
PTA Risk: Moderate

Based on 659 resolved cases by this examiner. Grant probability derived from career allow rate.
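
The derivation in the footnote is easy to reproduce from the career numbers shown above. A quick sketch; the tool's exact model isn't disclosed, so the additive interview adjustment is an assumption:

```python
granted, resolved = 472, 659          # examiner's career totals shown above
allow_rate = granted / resolved       # 0.716... -> displayed as 72%
with_interview = allow_rate + 0.194   # assumed additive +19.4% interview lift
print(f"baseline: {allow_rate:.1%}, with interview: {with_interview:.1%}")
# baseline: 71.6%, with interview: 91.0% -> displayed as 72% / 91%
```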
