Prosecution Insights
Last updated: April 19, 2026
Application No. 18/648,940

SYSTEMS AND METHODS FOR GENERATING VIRTUAL REALITY SCENES

Final Rejection — §103, §DP
Filed: Apr 29, 2024
Examiner: XU, XIAOLAN
Art Unit: 2488
Tech Center: 2400 — Computer Networks
Assignee: Adeia Guides Inc.
OA Round: 2 (Final)
Grant Probability: 74% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 87%

Examiner Intelligence

Career Allow Rate: 74% (247 granted / 334 resolved; +16.0% vs TC avg) — above average
Interview Lift: +13.3% (moderate), over resolved cases with interview
Typical Timeline: 2y 11m avg prosecution; 37 applications currently pending
Career History: 371 total applications across all art units
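
These figures are simple ratios over the examiner's resolved cases, and the arithmetic can be checked directly. A minimal sketch (counts taken from the cards above; the small gap between the computed +13.1 points and the displayed +13.3% suggests the dashboard measures lift against the no-interview subset rather than the overall rate):

```python
# Check the examiner-intelligence arithmetic from the counts shown above.
granted = 247                      # cases granted by this examiner
resolved = 334                     # total resolved cases

allow_rate = granted / resolved    # 0.7395... -> displayed as 74%
print(f"Career allow rate: {allow_rate:.1%}")

with_interview = 0.87              # allow rate when an interview was held
print(f"Interview lift: {with_interview - allow_rate:+.1%}")  # ~+13.1 pts vs the +13.3% shown
```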

Statute-Specific Performance

§101: 6.3% (-33.7% vs TC avg)
§103: 49.7% (+9.7% vs TC avg)
§102: 20.0% (-20.0% vs TC avg)
§112: 13.4% (-26.6% vs TC avg)
Baseline: Tech Center average estimate • Based on career data from 334 resolved cases
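
Each "vs TC avg" delta is the examiner's rate minus a Tech Center baseline, and working backwards from the four rows, every delta implies the same 40.0% baseline — consistent with a single Tech Center average estimate rather than per-statute averages. A quick sketch of that back-calculation (values copied from the table above):

```python
# Recover the implied Tech Center baseline from each statute row:
# baseline = examiner_rate - delta. All four rows imply 40.0%.
rows = {                     # statute: (examiner rate %, delta vs TC avg in points)
    "§101": (6.3, -33.7),
    "§103": (49.7, +9.7),
    "§102": (20.0, -20.0),
    "§112": (13.4, -26.6),
}
for statute, (rate, delta) in rows.items():
    print(f"{statute}: {rate:.1f}% examiner vs {rate - delta:.1f}% implied TC avg")
```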

Office Action

Grounds: §103, §DP
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Response to Arguments

Applicant's arguments with respect to claim(s) 1 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the "right to exclude" granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).

A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).

The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.

The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1 and 31-49 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims of U.S. Patent Nos. 11228750 B1 and 12003694 B2 in view of JIANG et al. (US 20200334893 A1).

JIANG discloses the first virtual reality (VR) scene comprises: a first portal ([0075] In an application scenario, a user can see the initial virtual scene (for example, a house) and the two first scene conversion triggers (for example, two portals) of the initial virtual scene), and a second portal ([0075] In an application scenario, a user can see the initial virtual scene (for example, a house) and the two first scene conversion triggers (for example, two portals) of the initial virtual scene); and the second VR scene comprises: a third portal ([0080] The second scene conversion trigger set in the target virtual scene includes at least one second scene conversion trigger), and a fourth portal ([0080] The second scene conversion trigger set in the target virtual scene includes at least one second scene conversion trigger);

and in response to a user interaction with the first portal, viewing of the first VR scene ends, and viewing of the second VR scene begins ([0058] a portal in the virtual scene is similar to a door, a gate, or an entrance, and user may enter another virtual scene by entering the portal; [0063] the scene conversion trigger may be a virtual portal, a cave entrance, a door, or a gate, etc., a user may enter a next virtual scene by using the scene conversion trigger; figure 1, [0075] Through each portal, the user can see different target virtual scenes. For example, through the first portal, the user can see partial information of a desert scene, and through the second portal, the user can see partial information of a forest scene; [0081] Render the determined target virtual scene and second scene conversion trigger set, and display the determined target virtual scene and second scene conversion trigger set in the screen area);

in response to a user interaction with the second portal, viewing of the first VR scene ends, and viewing of a third VR scene, generated based at least in part on a third plurality of nouns identified in a third text portion of the textual document, begins (figure 1, [0075] Through each portal, the user can see different target virtual scenes. For example, through the first portal, the user can see partial information of a desert scene, and through the second portal, the user can see partial information of a forest scene; [0081] Render the determined target virtual scene and second scene conversion trigger set, and display the determined target virtual scene and second scene conversion trigger set in the screen area);

in response to a user interaction with the third portal, viewing of the second VR scene ends, and viewing of the first VR scene is resumed ([0024] If one of the first scene conversion triggers is triggered, the terminal switches the virtual scene from the initial virtual scene to the target virtual scene associated with the first scene conversion trigger, and the target virtual scene and the second scene conversion trigger set in the target virtual scene are displayed on the screen area. In this case, the target virtual scene becomes the initial virtual scene, and correspondingly, the second scene conversion trigger set in the target virtual scene becomes the first scene conversion trigger set; conversion between a plurality of virtual scenes can be implemented, so that a user visually feels to travel amongst a plurality of virtual scenes; [0062] After switching from the desert scene to the land scene, the land scene becomes the initial virtual scene. Therefore, the initial virtual scene and target virtual scene are relative concepts, as after a scene conversion, a target virtual scene may be switched to an initial virtual scene);

and in response to a user interaction with the fourth portal, viewing of the second VR scene ends, and viewing of a fourth VR scene, generated based at least in part on a fourth plurality of nouns identified in a fourth portion of the textual document, begins ([0024] If one of the first scene conversion triggers is triggered, the terminal switches the virtual scene from the initial virtual scene to the target virtual scene associated with the first scene conversion trigger, and the target virtual scene and the second scene conversion trigger set in the target virtual scene are displayed on the screen area. In this case, the target virtual scene becomes the initial virtual scene, and correspondingly, the second scene conversion trigger set in the target virtual scene becomes the first scene conversion trigger set; conversion between a plurality of virtual scenes can be implemented, so that a user visually feels to travel amongst a plurality of virtual scenes).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of U.S. Patent No. 11228750 B1 or U.S. Patent No. 12003694 B2 and JIANG, to use different portals to change to different scenes and to repeat actions throughout a scene, in order to have a better VR experience.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1, 32-38, 40, and 42-48 are rejected under 35 U.S.C. 103 as being unpatentable over Coyne et al. (US 7962329 B1) in view of Harviainen (US 20220309743 A1) and JIANG et al. (US 20200334893 A1).

Regarding claims 1, 40. Coyne discloses A method (abstract, A system for generating a scene description from a set of words; column 1 lines 31-32, a method of converting text into three-dimensional scene descriptions) comprising: generating, based on a first plurality of nouns identified in a textual document, a first virtual reality (VR) scene (column 2 lines 20-27, A system to generate arbitrary scenes in response to a substantially unlimited range of words; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations; column 10 lines 3-9, Consider again the following example sentences: "John said that the cat was on the table. The animal was next to a bowl of apples." In an embodiment, two scene descriptions may be generated; a first scene description for the first sentence and a second scene description for the second sentence; column 4 line 10-column 5 line 12, The following identifiers have the following meaning: … NP stands for noun phrase, … VP stands for verb phrase, …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations) that comprises: a first plurality of VR objects, wherein the first plurality of objects is depicted as performing a first set of actions based on a first plurality of verbs identified in the textual document as related to the first plurality of nouns (column 2 line 65 - column 3 line 37, … Object database 40 may include a plurality of three-dimensional models for objects to be included in a low-level scene description. In addition to three-dimensional data, an embodiment may associate additional information with each three-dimensional model, such as a function of an object or its size. Pose database 42 may include poses for actions that may be typically associated with scene descriptions, such as jump, give, and carry. …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations); and generating, based on a second plurality of nouns identified in the textual document, a second VR scene (column 2 lines 20-27, A system to generate arbitrary scenes in response to a substantially unlimited range of words; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations; column 10 lines 3-9, Consider again the following example sentences: "John said that the cat was on the table. The animal was next to a bowl of apples." In an embodiment, two scene descriptions may be generated; a first scene description for the first sentence and a second scene description for the second sentence; column 4 line 10-column 5 line 12, The following identifiers have the following meaning: … NP stands for noun phrase, … VP stands for verb phrase, …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations) that comprises: a second plurality of VR objects, wherein the second plurality of objects is depicted as performing a second set of actions based on a second plurality of verbs identified in the textual document as related to the second plurality of nouns (column 2 line 65 - column 3 line 37, … Object database 40 may include a plurality of three-dimensional models for objects to be included in a low-level scene description. In addition to three-dimensional data, an embodiment may associate additional information with each three-dimensional model, such as a function of an object or its size. Pose database 42 may include poses for actions that may be typically associated with scene descriptions, such as jump, give, and carry. …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations).

Harviainen discloses wherein: the first set of actions are repeated while the first VR scene is being viewed ([0129] the run-time (loop) stage 732 may be "looped" (iteratively repeated) throughout a 3D-scene viewing experience of the user 702; [0102] a viewing client on the client device (e.g., a virtual reality (VR) client) may receive 3D scene data).

JIANG discloses the first virtual reality (VR) scene comprises: a first portal ([0075] In an application scenario, a user can see the initial virtual scene (for example, a house) and the two first scene conversion triggers (for example, two portals) of the initial virtual scene), and a second portal ([0075] In an application scenario, a user can see the initial virtual scene (for example, a house) and the two first scene conversion triggers (for example, two portals) of the initial virtual scene); and the second VR scene comprises: a third portal ([0080] The second scene conversion trigger set in the target virtual scene includes at least one second scene conversion trigger), and a fourth portal ([0080] The second scene conversion trigger set in the target virtual scene includes at least one second scene conversion trigger);

and in response to a user interaction with the first portal, viewing of the first VR scene ends, and viewing of the second VR scene begins ([0058] a portal in the virtual scene is similar to a door, a gate, or an entrance, and user may enter another virtual scene by entering the portal; [0063] the scene conversion trigger may be a virtual portal, a cave entrance, a door, or a gate, etc., a user may enter a next virtual scene by using the scene conversion trigger; figure 1, [0075] Through each portal, the user can see different target virtual scenes. For example, through the first portal, the user can see partial information of a desert scene, and through the second portal, the user can see partial information of a forest scene; [0081] Render the determined target virtual scene and second scene conversion trigger set, and display the determined target virtual scene and second scene conversion trigger set in the screen area);

in response to a user interaction with the second portal, viewing of the first VR scene ends, and viewing of a third VR scene, generated based at least in part on a third plurality of nouns identified in a third text portion of the textual document, begins (figure 1, [0075] Through each portal, the user can see different target virtual scenes. For example, through the first portal, the user can see partial information of a desert scene, and through the second portal, the user can see partial information of a forest scene; [0081] Render the determined target virtual scene and second scene conversion trigger set, and display the determined target virtual scene and second scene conversion trigger set in the screen area);

in response to a user interaction with the third portal, viewing of the second VR scene ends, and viewing of the first VR scene is resumed ([0024] If one of the first scene conversion triggers is triggered, the terminal switches the virtual scene from the initial virtual scene to the target virtual scene associated with the first scene conversion trigger, and the target virtual scene and the second scene conversion trigger set in the target virtual scene are displayed on the screen area. In this case, the target virtual scene becomes the initial virtual scene, and correspondingly, the second scene conversion trigger set in the target virtual scene becomes the first scene conversion trigger set; conversion between a plurality of virtual scenes can be implemented, so that a user visually feels to travel amongst a plurality of virtual scenes; [0062] After switching from the desert scene to the land scene, the land scene becomes the initial virtual scene. Therefore, the initial virtual scene and target virtual scene are relative concepts, as after a scene conversion, a target virtual scene may be switched to an initial virtual scene);

and in response to a user interaction with the fourth portal, viewing of the second VR scene ends, and viewing of a fourth VR scene, generated based at least in part on a fourth plurality of nouns identified in a fourth portion of the textual document, begins ([0024] If one of the first scene conversion triggers is triggered, the terminal switches the virtual scene from the initial virtual scene to the target virtual scene associated with the first scene conversion trigger, and the target virtual scene and the second scene conversion trigger set in the target virtual scene are displayed on the screen area. In this case, the target virtual scene becomes the initial virtual scene, and correspondingly, the second scene conversion trigger set in the target virtual scene becomes the first scene conversion trigger set; conversion between a plurality of virtual scenes can be implemented, so that a user visually feels to travel amongst a plurality of virtual scenes).

It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Coyne, Harviainen and JIANG, to use different portals to change to different scenes and to repeat actions throughout a scene, in order to have a better VR experience.
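
For orientation, the claim 1 limitations mapped above amount to a small scene graph: one VR scene per text portion, objects from that portion's nouns, looped actions from its verbs, and portals that switch (or resume) viewing between scenes. The sketch below is only an illustration of that claim language as characterized in this rejection, not the applicant's implementation or any cited reference's code; all names and sample words are hypothetical.

```python
# Hypothetical illustration of the scene/portal topology recited in claim 1.
# One scene per text portion: nouns -> VR objects, verbs -> actions repeated
# while the scene is viewed, portals -> transitions between scenes.
from dataclasses import dataclass, field

@dataclass
class VRScene:
    nouns: list[str]                                        # rendered as VR objects
    verbs: list[str]                                        # actions looped during viewing
    portals: dict[str, str] = field(default_factory=dict)   # portal id -> destination scene

scenes = {
    "scene1": VRScene(["cat", "table"], ["sit"]),
    "scene2": VRScene(["bowl", "apples"], ["roll"]),
    "scene3": VRScene(["desert"], ["shimmer"]),
    "scene4": VRScene(["forest"], ["sway"]),
}
# Claim 1 wiring: first/second portals sit in scene 1, third/fourth in scene 2.
scenes["scene1"].portals = {"first": "scene2", "second": "scene3"}
scenes["scene2"].portals = {"third": "scene1", "fourth": "scene4"}  # "third" resumes scene 1

def interact(current: str, portal: str) -> str:
    """Viewing of the current scene ends; the destination scene begins (or resumes)."""
    return scenes[current].portals[portal]

viewing = "scene1"
viewing = interact(viewing, "first")   # scene 1 -> scene 2
viewing = interact(viewing, "third")   # scene 2 -> scene 1 (resumed)
```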
Regarding claims 32, 42. (New) Coyne discloses The method of claim 1, further comprising identifying a parameter referenced in the second text portion which corresponds to one of the plurality of first nouns (column 9 lines 56-65, Nouns can also corefer, as in the following example: John said that the cat was on the table. The animal was next to a bowl of apples. While it is not strictly required that the animal denote the cat mentioned in the first sentence of the above example, the coherence of the discourse depends upon the reader or listener making that connection), wherein the generating the first VR scene is further based on the identified parameter (column 10 lines 3-20, Consider again the following example sentences: "John said that the cat was on the table. The animal was next to a bowl of apples." In an embodiment, two scene descriptions may be generated; a first scene description for the first sentence and a second scene description for the second sentence. Each scene description, as discussed earlier, may include multiple scene description fragments. In the second scene description, the object corresponding to the animal, may be described by a list of possible animals from a set of three-dimensional models; the object also may contain an annotation for a POSSIBLE-COREFERENT. In this example, the POSSIBLE-COREFERENT would be the node corresponding to the cat. Thus, the description module (see section entitled Interpretation of the Scene Description into a Three-Dimensional Image) may make use of POSSIBLE-COREFERENT information. Instead of adding a new animal and putting it next to a bowl of apples, the description module may put the already existent cat next to a bowl of apples).

Regarding claims 33, 43. (New) Coyne discloses The method of claim 1, further comprising identifying an adjective associated with one of the first plurality of nouns, wherein the generating the first VR scene is further based on the identified adjective (column 5 lines 25-30, A dependency structure is one possible representation of the semantic relations of a sentence. A dependency structure may enable the focusing of attention on certain dependents. For example, one might be interested in interpreting all adjectives that depend upon a noun (e.g., the large fat green cat)).

Regarding claims 34, 44. (New) Coyne discloses The method of claim 1, further comprising generating a first content structure (column 10 lines 3-20, Each scene description may include multiple scene description fragments; abstract, generating a scene description from a set of words includes modules that perform the following functions: performing a linguistic analysis on a set of words to generate a structure representation of semantic relationships between words within the set of words) comprising a first object that matches a first one of the first plurality of nouns, wherein the first object comprises a first plurality of attribute table entries based on a second one of the first plurality of nouns and the first plurality of verbs (figure 2, column 6 lines 30-67, Returning now to FIG. 2, at step 56, the dependency structure may be converted to a scene description. The scene description may be a description of the objects to be depicted in the scene, and the relationships between the objects. An example of a scene description for the sentence: "John said that the cat is on the table." is given below: …; column 4 lines 19-22, This result indicates that John is a proper noun (NNP), said and was are past tense verbs (VBD), the is an article (DT), on is a preposition (IN) and cat and table are nouns (NN)).

Regarding claims 35, 45. (New) Coyne discloses The method of claim 34, wherein the first plurality of attribute table entries is further based on at least one of a first one of the second plurality of nouns and a first one of the second plurality of verbs (column 14 lines 39-49, Implicit constraints. Implicit constraints are those constraints which may be imposed on objects because of the objects' usage in the context of the entered text. Consider the sentences: "The lamp is on the table. The glass is next to the lamp." It may be preferable not to have the glass floating in the air next to the lamp. Instead, the glass would preferably be put on the table. Therefore, an implicit constraint may be implemented which provides that "If X is next to Y, and X is not already on a surface, and X is not an airborne object (e.g., a helium balloon), then place X on the same surface as Y." Other implicit constraints may additionally be implemented).

Regarding claims 36, 46. (New) Coyne discloses The method of claim 35, wherein the generating the first VR scene comprises generating the first VR scene based on the first content structure (abstract, generating a scene description from a set of words includes modules that perform the following functions: performing a linguistic analysis on a set of words to generate a structure representation of semantic relationships between words within the set of words).

Regarding claims 37, 47. (New) Coyne discloses The method of claim 1, further comprising: parsing the textual document to identify a third text portion corresponding to the third scene (column 2 lines 20-27, A system to generate arbitrary scenes in response to a substantially unlimited range of words; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations; column 10 lines 3-9, Consider again the following example sentences: "John said that the cat was on the table. The animal was next to a bowl of apples." In an embodiment, two scene descriptions may be generated; a first scene description for the first sentence and a second scene description for the second sentence); identifying the third plurality of nouns referenced in the third text portion, and a third plurality of verbs related to the third plurality of nouns (column 4 line 10-column 5 line 12, The following identifiers have the following meaning: … NP stands for noun phrase, … VP stands for verb phrase, …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations); parsing the textual document to identify a fourth text portion corresponding to the fourth scene (column 2 lines 20-27, A system to generate arbitrary scenes in response to a substantially unlimited range of words; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations; column 10 lines 3-9, Consider again the following example sentences: "John said that the cat was on the table. The animal was next to a bowl of apples." In an embodiment, two scene descriptions may be generated; a first scene description for the first sentence and a second scene description for the second sentence); identifying the fourth plurality of nouns referenced in the fourth text portion, and a fourth plurality of verbs related to the fourth plurality of nouns (column 4 line 10-column 5 line 12, The following identifiers have the following meaning: … NP stands for noun phrase, … VP stands for verb phrase, …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations); wherein the third VR scene comprises a third plurality of objects based on the third plurality of nouns, each object of the third plurality of objects depicted as performing an action based on the third plurality of verbs (column 2 line 65 - column 3 line 37, … Object database 40 may include a plurality of three-dimensional models for objects to be included in a low-level scene description. In addition to three-dimensional data, an embodiment may associate additional information with each three-dimensional model, such as a function of an object or its size. Pose database 42 may include poses for actions that may be typically associated with scene descriptions, such as jump, give, and carry. …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations); and wherein the fourth VR scene comprises a fourth plurality of objects based on the fourth plurality of nouns, each object of the fourth plurality of objects depicted as performing an action based on the fourth plurality of verbs (column 2 line 65 - column 3 line 37, … Object database 40 may include a plurality of three-dimensional models for objects to be included in a low-level scene description. In addition to three-dimensional data, an embodiment may associate additional information with each three-dimensional model, such as a function of an object or its size. Pose database 42 may include poses for actions that may be typically associated with scene descriptions, such as jump, give, and carry. …; column 9 lines 49-50, semantically interpreting words that denote particular objects, actions, or relations).

Regarding claims 38, 48. (New) Coyne discloses The method of claim 1, further comprising parsing the textual document to identify a first location for the first VR scene and a second location for the second VR scene (column 1 lines 31-32, a method of converting text into three-dimensional scene descriptions; column 8 lines 49-59, verbs may include ACTIONLOCATION; column 9 lines 10-18, This semantic entry includes a set of verb frames, each of which defines the argument structure of one "sense" of the verb say. Arguments include an action location).

Claims 31, 39, 41, 49 are rejected under 35 U.S.C. 103 as being unpatentable over Coyne et al. (US 7962329 B1) in view of Harviainen (US 20220309743 A1) and JIANG et al. (US 20200334893 A1) as applied above in claim 1, and further in view of Grigore (US 20200314477 A1).

Regarding claims 31, 41. (New) Grigore discloses The method of claim 1, wherein the first VR scene, the second VR scene, and the fourth VR scene form a storyline ([0003] systems and methods for users to consider the individual storylines, navigate to the content with timelines they want to consume, and filter and watch content. [0076] The timeline may be given by actual storyline timeline (e.g., as presented in the episodes chronologically); [0047] The system may be a virtual reality device).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Coyne, Harviainen and JIANG with the invention of Grigore, to use a storyline to change scenes, in order to have a better VR experience.

Regarding claims 39, 49. (New) Grigore discloses The method of claim 1, wherein the third VR scene includes a particular character that was in the first VR scene ([0003] systems and methods for users to consider the individual storylines, navigate to the content with timelines they want to consume, and filter and watch content. [0076] The timeline may be given by actual storyline timeline (e.g., as presented in the episodes chronologically); [0047] The system may be a virtual reality device; abstract, The storylines may include other attributes than chronological timelines such as characters; [0036] the system may allow the user to play particular chapters related to predefined attributes such as a particular character's presence). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the inventions of Coyne, Harviainen and JIANG with the invention of Grigore, to include a particular character to change scenes, in order to have a better VR experience.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to XIAOLAN XU whose telephone number is (571) 270-7580. The examiner can normally be reached Mon. to Fri. 9am-5pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, SATH V. PERUNGAVOOR, can be reached on (571) 272-7455. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/XIAOLAN XU/
Primary Examiner, Art Unit 2488
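
Claims 34/44 above recite a "content structure": an object matching one of the first plurality of nouns, carrying attribute table entries derived from another noun and from the related verbs. As a minimal, hypothetical sketch of what such a structure could look like (field names are invented for illustration; the claims do not prescribe any particular layout):

```python
# Hypothetical sketch of the "content structure" of claims 34/44: an object
# matching a first noun, with attribute-table entries drawn from a second
# noun and from related verbs. Illustrative only, not the claimed format.
content_structure = {
    "object": "cat",                                                   # matches a first noun
    "attribute_table": [
        {"attribute": "location", "value": "table", "source": "noun"}, # from a second noun
        {"attribute": "action", "value": "sit", "source": "verb"},     # from a related verb
    ],
}
for entry in content_structure["attribute_table"]:
    print(f'{content_structure["object"]}.{entry["attribute"]} = {entry["value"]}')
```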

Prosecution Timeline

Apr 29, 2024
Application Filed
Mar 31, 2025
Non-Final Rejection — §103, §DP
Jul 03, 2025
Response Filed (Response after Non-Final Action)
Sep 30, 2025
Response Filed
Dec 05, 2025
Final Rejection — §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12598315
IMAGE ENCODING/DECODING METHOD AND DEVICE FOR DETERMINING SUB-LAYERS ON BASIS OF REQUIRED NUMBER OF SUB-LAYERS, AND BIT-STREAM TRANSMISSION METHOD
Granted Apr 07, 2026 • 2y 5m to grant
Patent 12586255
CONFIGURABLE POSITIONS FOR AUXILIARY INFORMATION INPUT INTO A PICTURE DATA PROCESSING NEURAL NETWORK
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12587652
IMAGE CODING DEVICE AND METHOD
Granted Mar 24, 2026 • 2y 5m to grant
Patent 12581120
Method and Apparatus for Signaling Tile and Slice Partition Information in Image and Video Coding
Granted Mar 17, 2026 • 2y 5m to grant
Patent 12581092
TEMPORAL INITIALIZATION POINTS FOR CONTEXT-BASED ARITHMETIC CODING
Granted Mar 17, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 87% (+13.3%)
Median Time to Grant: 2y 11m
PTA Risk: Moderate
Based on 334 resolved cases by this examiner. Grant probability is derived from the career allow rate.
