Prosecution Insights
Last updated: April 19, 2026
Application No. 18/363,076

Method and a System for Creating Persistent Augmented Scene Graph Information

Status: Final Rejection (§103)
Filed: Aug 01, 2023
Examiner: CHU, DAVID H
Art Unit: 2616
Tech Center: 2600 — Communications
Assignee: Cisco Technology Inc.
OA Round: 2 (Final)

Grant Probability: 78% (Favorable)
Expected OA Rounds: 3-4
Median Time to Grant: 2y 9m
Grant Probability With Interview: 81%

Examiner Intelligence

Career Allow Rate: 78% — above average (532 granted / 682 resolved; +16.0% vs TC avg)
Interview Lift: +2.7% (minimal, roughly +3%), measured across resolved cases with interview
Avg Prosecution: 2y 9m typical timeline (32 applications currently pending)
Total Applications: 714 career total, across all art units
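
These headline numbers are simple ratios over the examiner's resolved docket. As a minimal sketch, assuming the figures shown above and standard rounding (the ExaminerStats type and its field names are invented for illustration, not any real API), the 78% and 81% values can be reproduced as follows:

```typescript
// Reproduce the headline examiner stats from the dashboard figures above.
// ExaminerStats and its field names are hypothetical, not a product API.
interface ExaminerStats {
  granted: number;          // career grants (532)
  resolved: number;         // resolved cases: grants + abandonments (682)
  interviewLiftPts: number; // percentage-point lift observed with interviews (+2.7)
}

const stats: ExaminerStats = { granted: 532, resolved: 682, interviewLiftPts: 2.7 };

// Career allow rate: 532 / 682 ≈ 78.0%
const allowRate = (stats.granted / stats.resolved) * 100;

// With-interview estimate: 78.0 + 2.7 = 80.7, which rounds to the displayed 81%
const withInterview = allowRate + stats.interviewLiftPts;

console.log(`allow rate ${allowRate.toFixed(1)}%, with interview ${withInterview.toFixed(0)}%`);
```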

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 18.1% (-21.9% vs TC avg)
§112: 5.5% (-34.5% vs TC avg)
Tech Center averages are estimates • Based on career data from 682 resolved cases
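
Each "vs TC avg" figure is the examiner's per-statute rate minus the Tech Center average estimate, so the baseline can be recovered from any row. Notably, every row here implies the same 40.0% figure, suggesting the deltas were computed against a single aggregate estimate. A short sketch (row values copied from the table; all names are illustrative):

```typescript
// Recover the implied Tech Center baseline from each statute row above.
const rows = [
  { statute: "§101", rate: 9.8, deltaVsTc: -30.2 },
  { statute: "§103", rate: 57.8, deltaVsTc: 17.8 },
  { statute: "§102", rate: 18.1, deltaVsTc: -21.9 },
  { statute: "§112", rate: 5.5, deltaVsTc: -34.5 },
];

for (const { statute, rate, deltaVsTc } of rows) {
  // baseline = examiner rate - delta; every row yields the same ~40.0%,
  // consistent with one Tech Center average estimate behind all four deltas.
  console.log(`${statute}: implied TC average ${(rate - deltaVsTc).toFixed(1)}%`);
}
```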

Office Action

Rejection basis: §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments filed 1/20/2026 with respect to claims 1-20 have been fully considered but are moot in view of the new ground(s) of rejection.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Iskandar et al. (US Patent No. 11157739) in view of Holzer et al. (PGPUB Document No. US 2015/0134651).

Regarding claim 9, Iskandar teaches a system for creating persistent augmented scene graph information, the system comprising: one or more processors (processors 102 (Iskandar: col.6, line 35-43)); and one or more computer-readable non-transitory storage media in communication with the one or more processors and comprising instructions (computer-readable storage medium (Iskandar: col.6, line 35-43)) that, when executed by the one or more processors, are configured to cause the system to: obtain, by a host device (first device 302 (Iskandar: FIG.3A)), a plurality of real-world scene graphs of a physical environment from a plurality of computing devices, wherein each real-world scene graph corresponds to a point of view of its respective computing device (obtain one or more scene graphs from an external device (Iskandar: col.21, line 6-20); note that three or more devices can be connected and participate in the computer graphics reality session (Iskandar: col.26, line 4-6), so scene graphs from more than one external device may be obtained and combined with that of the first device 302); detect a plurality of objects from the plurality of real-world scene graphs for the physical environment based on different points of view of the plurality of computing devices (some objects are seen from both devices (Iskandar: col.16, line 15-25), while other objects are only seen at one device based on the device POV (Iskandar: col.16, line 40-54)).
Note that a scene graph represents entities within the virtual environment (Iskandar: col.16, line 22-25), wherein entities are virtual objects in the virtual environment (Iskandar: col.9, line 60-62). Iskandar further teaches causing the system to: determine object data comprising geometrical information (shape), positional information (position), semantic information, and state information (state) of each object within the plurality of real-world scene graphs from different points of view (a CGR data object comprising property data such as shape, position and state information (Iskandar: col.9, line 62-64 & col.10, line 4-8); the CGR data object further includes reference information that points to the location of the data within memory (Iskandar: col.9, line 64-66)); create composite object data of each object based on the object data (combining scene graphs from different devices (Iskandar: col.17, line 58-65), such as combined scene graph 306 comprising scene graph 316A from another device (Iskandar: col.21, line 6-20)), wherein the composite object data of each object is mapped to the plurality of real-world scene graphs (further, the state of a scene graph object such as lamp 208 is updated/synchronized across the other devices (Iskandar: col.24, line 34-53), wherein the update in data across devices corresponds to "data object is mapped to the real-world scene graph"); create, by the host device (first device 302 (Iskandar: FIG.3A)), scene graph information based on the mapping of the composite object data (a combined scene graph such as combined scene graph 306 comprising scene graph 316A from another device (Iskandar: col.21, line 6-20)); and update the scene graph information to each of the plurality of computing devices, wherein the scene graph information corresponds to the full view of the physical environment (the resulting combined scene graph according to the teachings of Iskandar stated above).

However, Iskandar does not expressly teach wherein the scene graph information represents a full view of the physical environment based at least in part on the different points of view of the plurality of computing devices. Holzer teaches the concept of a scene graph comprising multiple views for a complete view of the environment (Holzer: 0085). Therefore, before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the teachings of Iskandar such that the types of scene graphs further include those disclosed by Holzer, because this enables an added variety of data (scene graph) types.

Regarding claim 10, Iskandar teaches the system of claim 9, wherein the instructions, when executed by the one or more processors, are further configured to cause the system to: compute a logical relationship (even when tablet computer 206 is shared between devices, the content is only visible in a certain view (Iskandar: col.16, line 37-54)) and a spatial relationship between each object from the different points of view (shared objects such as window 216 being viewed by only one user is based on the spatial relationship (Iskandar: col.17, line 41-50)).

Regarding claim 11, Iskandar teaches the system of claim 9, wherein the scene graph information is provided to each computing device for display in a corresponding point of view (the example in the rejection above pertaining to window 216 and the contents of tablet computer 206 demonstrates the point of view determining the content to be displayed across the respective devices (Iskandar: col.16, line 37-54 & col.17, line 41-50)).
Regarding claim 12, Iskandar teaches the system of claim 9, wherein the instructions, when executed by the one or more processors, are further configured to cause the system to: receive updated real-world scene graphs from one or more of the plurality of computing devices; and continuously update the composite object data based on the received updated real-world scene graphs (as stated in the rejection of claim 9 above, the state of a scene graph object such as lamp 208 is updated/synchronized across the other devices (Iskandar: col.24, line 34-53), wherein the update in data across devices corresponds to "data object is mapped to the real-world scene graph").

Regarding claim 13, Iskandar teaches the system of claim 12, wherein the instructions, when executed by the one or more processors, are further configured to cause the system to: compute an anchor point and an orientation point of an object based on a logical relationship and a spatial relationship between objects in each real-world scene graph (system 100 uses image sensor(s) 108 to track the position and orientation of display(s) 120 relative to one or more fixed objects in the real environment (Iskandar: col.7, line 35-48)).

Regarding claim 14, Iskandar teaches the system of claim 13, wherein the instructions, when executed by the one or more processors, are further configured to cause the system to: determine type and characteristics of each computing device for updating the scene graph information (limiting what content is shared between devices based on privacy and security corresponds to updating scene graph information based on the privacy/security level of the respective devices (Iskandar: col.9, line 42-45), such as the example of what contents of the tablet computer are shared based on the device (Iskandar: col.16, line 37-54)).

Regarding claim 15, Iskandar teaches the system of claim 14, wherein the instructions, when executed by the one or more processors, are further configured to cause the system to: modify a particular real-world scene graph based on type and characteristics corresponding to a particular computing device using the scene graph information (any changes to the content of the tablet computer 206 will only be updated on the devices that the content is shared with (Iskandar: col.16, line 37-54 & col.22, line 10-33)); and generate a particular AR scene graph according to a point of view of the particular computing device (the resulting views 202 and 204 shown in FIG.2 demonstrate the scene graph reflecting how content of the tablet computer is visible based on the user/view).

Regarding claim 16, Iskandar teaches the system of claim 9, wherein the instructions, when executed by the one or more processors, are further configured to cause the system to: generate audio-based output corresponding to the scene graph information according to a plurality of features of a particular computing device comprising type, characteristics, movement, position, intent of a wearer, time, and location within the physical environment (the data object of Iskandar comprises dynamic object behavior, "which defines an object that has a physical representation (e.g., collision shape, material) that is simulated by the physics subsystem—the results of the physics simulation are then used to update the graphics model, which is rendered by the graphics system. If there is a collision, it can trigger a sound effect, which is played back by the audio subsystem" (Iskandar: col.10, line 62 – col.11, line 17)).
Claims 1-8 are the corresponding method claims of claims 9-16. The limitations of claims 1-8 are substantially similar to the limitations of claims 9-16. Therefore, they have been analyzed and rejected in substantially the same manner as claims 9-16. Claims 17-20 are the corresponding computer-readable media claims of claims 9, 4, 5 and 7 (computer-readable storage medium (Iskandar: col.6, line 35-43)). The limitations of claims 17-20 are substantially similar to the limitations of claims 9, 4, 5 and 7. Therefore, they have been analyzed and rejected in substantially the same manner as claims 9, 4, 5 and 7.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to David H Chu, whose telephone number is (571) 272-8079. The examiner can normally be reached M-F: 9:30-1:30pm, 3:30-8:30pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel F Hajnik, can be reached at (571) 272-7642. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DAVID H CHU/
Primary Examiner, Art Unit 2616
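
Taken together, the claim-9 mapping above describes a host device that collects per-device scene graphs, groups each detected object's per-view observations into composite object data, and redistributes the merged scene graph information to every device. The sketch below is a hypothetical illustration of that pipeline only; neither the application nor Iskandar discloses source code, and every type and function name here (ObjectData, SceneGraph, mergeSceneGraphs) is invented:

```typescript
// Hypothetical sketch of the claim-9 pipeline (illustrative only; not the
// applicant's or Iskandar's implementation).
interface ObjectData {
  id: string;                         // object identity across views
  geometry: string;                   // geometrical information (shape)
  position: [number, number, number]; // positional information
  semantics: string;                  // semantic information, e.g. "lamp"
  state: Record<string, unknown>;     // state information
}

interface SceneGraph {
  deviceId: string; // the point of view this graph was captured from
  objects: ObjectData[];
}

// "Create composite object data": group each object's per-view observations,
// keyed by object id, so every object stays mapped to its source scene graphs.
function mergeSceneGraphs(graphs: SceneGraph[]): Map<string, ObjectData[]> {
  const composite = new Map<string, ObjectData[]>();
  for (const graph of graphs) {
    for (const obj of graph.objects) {
      const observations = composite.get(obj.id) ?? [];
      observations.push(obj);
      composite.set(obj.id, observations);
    }
  }
  return composite;
}

// Example: two devices each observing lamp 208 from different points of view.
const merged = mergeSceneGraphs([
  { deviceId: "A", objects: [{ id: "lamp-208", geometry: "cylinder", position: [1, 0, 2], semantics: "lamp", state: { on: true } }] },
  { deviceId: "B", objects: [{ id: "lamp-208", geometry: "cylinder", position: [1, 0, 2], semantics: "lamp", state: { on: true } }] },
]);
console.log(merged.get("lamp-208")?.length); // 2 observations of the same object

// The host would then derive scene graph information from the composite map
// and update each device so all of them see the full view of the environment.
```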

Prosecution Timeline

Aug 01, 2023: Application Filed
Sep 30, 2025: Non-Final Rejection (§103)
Jan 14, 2026: Applicant Interview (Telephonic)
Jan 15, 2026: Examiner Interview Summary
Jan 20, 2026: Response Filed
Feb 07, 2026: Final Rejection (§103) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602881: ELECTRONIC DEVICE AND METHOD FOR PROVIDING AUGMENTED REALITY
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12591402: AUGMENTED REALITY COLLABORATION SYSTEM WITH ANNOTATION CAPABILITY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591695: METHOD OF IMAGE PROCESSING FOR THREE-DIMENSIONAL RECONSTRUCTION IN AN EXTENDED REALITY ENVIRONMENT AND A HEAD MOUNTED DISPLAY
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12524907: INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND NON-TRANSITORY COMPUTER-EXECUTABLE MEDIUM
Granted Jan 13, 2026 (2y 5m to grant)

Patent 12494011: RAY TRACING HARDWARE AND METHOD
Granted Dec 09, 2025 (2y 5m to grant)
Study what changed to get past this examiner, based on these 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 78%
With Interview: 81% (+2.7%)
Median Time to Grant: 2y 9m
PTA Risk: Moderate
Based on 682 resolved cases by this examiner. Grant probability derived from career allow rate.
