Prosecution Insights
Last updated: April 19, 2026
Application No. 18/727,962

METHODS AND DEVICES FOR INTERACTIVE RENDERING OF A TIME-EVOLVING EXTENDED REALITY SCENE

Status: Non-Final OA (§103)
Filed: Jul 10, 2024
Examiner: PATEL, SHIVANG I
Art Unit: 2615
Tech Center: 2600 (Communications)
Assignee: InterDigital CE Patent Holdings SAS
OA Round: 1 (Non-Final)

Grant Probability: 74% (Favorable)
OA Rounds: 1-2
To Grant: 2y 4m
With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (309 granted / 415 resolved), +12.5% vs TC avg (above average)
Interview Lift: +18.5% across resolved cases with interview (a strong lift)
Typical Timeline: 2y 4m average prosecution; 22 applications currently pending
Career History: 437 total applications across all art units

Statute-Specific Performance

§101: 10.3% (-29.7% vs TC avg)
§103: 57.8% (+17.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)

Tech Center averages are estimates. Based on career data from 415 resolved cases.
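A quick consistency check on the deltas above (an inference from the reported figures, not a documented methodology) shows that every statute backs out to the same Tech Center baseline, which fits the "estimate" caveat. The sketch below is illustrative Python; all names are invented:

```python
# Backing the Tech Center baseline out of the reported deltas (arithmetic on
# the figures above; the flat 40% baseline is inferred, not stated anywhere).
examiner_rate = {"§101": 10.3, "§103": 57.8, "§102": 16.7, "§112": 13.5}
delta_vs_tc = {"§101": -29.7, "§103": +17.8, "§102": -23.3, "§112": -26.5}
for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]  # e.g. 57.8 - 17.8 = 40.0 for §103
    print(f"{statute}: examiner {rate}% vs TC avg {tc_avg:.1f}%")
# Every statute backs out to a 40.0% TC average, i.e., a single flat
# baseline estimate rather than true per-statute averages.
```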

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

"A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made."

Claims 1-14 are rejected under 35 U.S.C. 103 as being obvious over Bouazizi et al. (US 2022/0335694 A1) in view of Yip et al. (US 2022/0337919 A1).

Regarding claim 1, Bouazizi discloses a method for rendering an extended reality scene relative to a user in a timed environment ([0023]: streaming immersive media content, e.g., for extended reality (XR) content, such as augmented reality (AR), mixed reality (MR), or virtual reality (VR) content), the method comprising: obtaining a description of the extended reality scene ([0092]: information provided in the scene description may then be used by a presentation engine to render the 3D scene properly, using techniques like Physically-Based Rendering (PBR) that produce realistic scenes), the description comprising: a scene tree describing at least one of timed objects, virtual objects, or relationships between objects ([0093]: a scene description usually includes a scene graph, which is a directed acyclic graph, typically a plain tree structure, that represents an object-based hierarchy of the geometry of a scene); and behavior data items ([0118]: the runtime handles functionality such as frame composition, user-triggered actions, and tracking information), wherein a behavior data item comprises at least an action, an action being a description of a process to be performed by an extended reality engine on objects described by nodes of the scene tree ([0094]: the scene graph also supports animation nodes that allow animation properties to change over time); and, on condition that the at least a trigger of a behavior data item is activated, applying the actions of the behavior data item ([0142]: when a user interacts with (e.g., makes contact with) a virtual object (as detected through collision in response to movement detected by camera 308, sensors 310, and/or user interface device 306), or through animations defined in the scene graph).

Yip discloses at least a trigger, wherein a trigger is a description of conditions ([0045]: support for dynamic scene updates: timed updates, or event (user interaction) triggered updates), the trigger being activated when its conditions are detected in the timed environment ([0071]: metadata related to the scene graph update describing operations and/or conditions related to the update).

Bouazizi and Yip are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the immersive media content of Bouazizi to include at least a trigger, wherein a trigger is a description of conditions, the trigger being activated when its conditions are detected in the timed environment, as described by Yip. The motivation for doing so would have been to support timed and event-triggered updates in a scene description for extended reality (XR) multimedia (Yip, [0011]). Therefore, it would have been obvious to combine Bouazizi and Yip to obtain the invention as specified in claim 1.

Regarding claim 2, Bouazizi discloses: when a description of the extended reality scene is obtained, attributing an activation status set to false to at least one trigger of the description ([0142]: scene graph 324 may be updated according to various inputs, such as when a user moves their head (as detected by camera 308 and/or sensors 310) or when a user interacts with (e.g., makes contact with) a virtual object); and, when the conditions of the at least one trigger are met for a first time, setting the activation status of the trigger to true and activating the trigger ([0145]: using these various elements of user input devices 306, a user may provide input representing virtual world movement, interaction with virtual objects (e.g., opening a door or chest), opening a menu, or the like).

Regarding claim 3, Bouazizi discloses: when the conditions of the at least one trigger are met, if the activation status of the trigger is set to true, activating the trigger only if the description of the trigger authorizes a second activation ([0152]: presentation unit 330 may determine an animation to render on the virtual object in response to such action, e.g., movement, rotation, deformation, or the like of the virtual object).
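To make the rejected claim language concrete, here is a minimal Python sketch of the trigger and behavior-data-item model recited in claims 1-3. Every identifier (Trigger, Behavior, check_triggers, and so on) is a hypothetical illustration drawn from the claim wording, not from Bouazizi, Yip, or the application itself:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Trigger:
    # A trigger is a description of conditions in the timed environment
    # (claim 1); its activation status starts as False (claim 2).
    condition: Callable[[Dict], bool]
    allow_reactivation: bool = False  # claim 3: may authorize a second activation
    activated: bool = False

@dataclass
class Behavior:
    # A behavior data item: at least one trigger plus at least one action to
    # be performed by the XR engine on scene-tree nodes (claim 1).
    triggers: List[Trigger]
    actions: List[Callable[[], None]]
    priority: int = 0  # claim 4 priority parameter (used in the next sketch)
    interrupt_action: Optional[Callable[[], None]] = None  # claim 5 (next sketch)

def check_triggers(behavior: Behavior, env: Dict) -> bool:
    # Claims 2-3: the first time a trigger's conditions are met, set its
    # status to True and activate it; afterwards, activate again only if
    # the trigger's description authorizes a second activation.
    fired = False
    for t in behavior.triggers:
        if t.condition(env):
            if not t.activated:
                t.activated = True
                fired = True
            elif t.allow_reactivation:
                fired = True
    return fired

# Example: a behavior whose trigger fires when the user touches an object.
touch = Trigger(condition=lambda env: env.get("collision", False))
wave = Behavior(triggers=[touch], actions=[lambda: print("play wave animation")])
print(check_triggers(wave, {"collision": True}))  # True: first activation
print(check_triggers(wave, {"collision": True}))  # False: no second activation authorized
```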
Regarding claim 4, Bouazizi discloses: wherein behavior data items comprise a priority parameter and, when the at least a trigger of at least two behavior data items is activated, applying the at least an action of one of the at least two behavior data items according to the priority parameter of the at least two behavior data items ([0091]: a temporal sub-sequence may also include other pictures, such as P-frames and/or B-frames that depend from SAPs; frames and/or slices of the temporal sub-sequence may be arranged within the segments such that frames/slices of the temporal sub-sequence that depend on other frames/slices of the sub-sequence can be properly decoded).

Regarding claim 5, Bouazizi discloses a method for updating, at runtime, a first description of an extended reality scene comprising behavior data items with a second description of the extended reality scene ([0023]: streaming immersive media content, e.g., for extended reality (XR) content, such as augmented reality (AR), mixed reality (MR), or virtual reality (VR) content), wherein the method comprises, for each on-going behavior data item of the first description, if the on-going behavior data item is not applicable with the second description ([0116]: the interaction with these different modalities is assured through dedicated interfaces to the presentation engine that consumes the scene description): stopping the on-going behavior data item and applying the second description ([0057]: retrieval unit 52 may submit a request to leave the multicast group when data of the multicast group is no longer needed, e.g., to stop playback or to change channels to a different multicast group).

Yip discloses processing an interrupt action, if existing, for the on-going behavior data item in the first description ([0066]: timed media supported scene updates may be used to achieve a timed media supported dynamic scene triggered either through time in the presentation or by user interaction events during the presentation).

Bouazizi and Yip are combinable because they are from the same field of invention. It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the immersive media content of Bouazizi to include processing an interrupt action, if existing, for the on-going behavior data item in the first description, as described by Yip. The motivation for doing so would have been to support timed and event-triggered updates in a scene description for extended reality (XR) multimedia (Yip, [0011]). Therefore, it would have been obvious to combine Bouazizi and Yip to obtain the invention as specified in claim 5.
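A companion sketch for the claim 4 priority rule and the claim 5 runtime-update flow, reusing the hypothetical Behavior type from the previous sketch; is_applicable and apply_second_description are invented stand-ins for whatever the engine actually checks and applies:

```python
from typing import Callable, List

def apply_by_priority(fired: List["Behavior"]) -> None:
    # Claim 4: when the triggers of at least two behavior data items are
    # activated, apply the actions of one of them according to its priority
    # parameter (here: highest priority wins, an assumed tie-break policy).
    winner = max(fired, key=lambda b: b.priority)
    for action in winner.actions:
        action()

def update_scene(ongoing: List["Behavior"],
                 is_applicable: Callable[["Behavior"], bool],
                 apply_second_description: Callable[[], None]) -> None:
    # Claim 5: for each on-going behavior data item of the first description
    # that is not applicable with the second description, process its
    # interrupt action if one exists (the limitation mapped to Yip), stop
    # the behavior, and then apply the second description.
    for b in list(ongoing):
        if not is_applicable(b):
            if b.interrupt_action is not None:
                b.interrupt_action()
            ongoing.remove(b)
    apply_second_description()
```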
Regarding claim 6, Bouazizi discloses a device for rendering an extended reality scene relative to a user in a timed environment, the device comprising a memory associated with a processor configured to perform the operations recited for claim 1; the rejection maps the same Bouazizi passages ([0023], [0092], [0093], [0118], [0094], [0142]) and the same Yip passages ([0045], [0071]), with the same combination rationale (Yip, [0011]), as for claim 1.

Regarding claims 7, 8, and 9, the rejection repeats, for the device, the mappings given for method claims 2, 3, and 4, respectively (claim 7: Bouazizi [0142], [0145]; claim 8: Bouazizi [0152]; claim 9: Bouazizi [0091]).

Regarding claim 10, Bouazizi and Yip are applied to a device for updating, at runtime, a first description of an extended reality scene comprising behavior data items with a second description of the extended reality scene; the rejection repeats the claim 5 mappings (Bouazizi [0023], [0116], [0057]; Yip [0066]) and the same combination rationale (Yip, [0011]).

Regarding claims 11-14, the rejection applies identical reasoning to each claim: Bouazizi discloses wherein behavior data items comprise a priority parameter and, when the at least a trigger of at least two behavior data items is activated, applying the at least an action of one of the at least two behavior data items according to the priority parameter ([0116]: interaction with these different modalities is assured through dedicated interfaces to the presentation engine that consumes the scene description; the interaction with the local real-world of the viewer is performed through the i−1 and i-i interfaces).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SHIVANG I PATEL, whose telephone number is (571) 272-8964. The examiner can normally be reached M-F, 9 am to 5 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at (571) 272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/SHIVANG I PATEL/
Primary Examiner, Art Unit 2615

Prosecution Timeline

Jul 10, 2024
Application Filed
Mar 31, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602847
SYSTEMS AND METHODS FOR LAYERED IMAGE GENERATION
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12599838
APPARATUS AND METHODS FOR RECORDING AND REPORTING ABUSIVE ONLINE INTERACTIONS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12592004
IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12591947
DISTORTION-BASED IMAGE RENDERING
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12584296
Work Machine Display Control System, Work Machine Display System, Work Machine, Work Machine Display Control Method, And Work Machine Display Control Program
Granted Mar 24, 2026 (2y 5m to grant)

Based on this examiner's 5 most recent grants; study what changed to get past this examiner.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 74%
With Interview: 93% (+18.5%)
Median Time to Grant: 2y 4m
PTA Risk: Low

Based on 415 resolved cases by this examiner. Grant probability is derived from the career allow rate.
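The headline projections reproduce from the career figures with simple arithmetic; the sketch below is one plausible reading of "derived from career allow rate," not the product's published formula:

```python
# Illustrative arithmetic only; the scoring model behind these projections
# is not published, so treat the formula as an assumption that fits the data.
granted, resolved = 309, 415
allow_rate = granted / resolved               # 0.7446 -> displayed as 74%
interview_lift = 0.185                        # the reported +18.5% lift
with_interview = allow_rate + interview_lift  # 0.9296 -> displayed as 93%
print(f"baseline {allow_rate:.0%}, with interview {with_interview:.0%}")
# -> baseline 74%, with interview 93%
```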
