DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 02/20/2026 has been entered.
Response to Amendment
This is in response to applicant's amendment/response filed on 02/20/2026, which has been entered and made of record. Claims 1, 8, 10, 13 and 16 have been amended.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 10,748,339. Although the claims at issue are not identical, they are not patentably distinct from each other because this application is a continuation of Application No. 15/445,806 (U.S. Patent No. 10,748,339) and claims, in more words but in a broader manner, the invention concisely claimed in 15/445,806.
The claims map to each other as follows:

Instant Application, Claim 1:
A method performed by a computer that is part of an extended reality system, the method comprising:
implementing one or more source stories, each source story including one or more pieces of content and display media;
detecting a selection of a first source story of the one or more source stories;
identifying at least one first trigger that provides one or more display media related to the source story;
causing output of the one or more display media;
identifying one or more extended reality environments pertaining to the source story in a portion of the one or more display media by identifying at least one second trigger depicted within the one or more display media, the one or more extended reality environments including a piece from the source story;
terminating presentation of the one or more display media; and
causing output of the one or more extended reality environments.

U.S. Patent No. 10,748,339, Claim 1:
A computer-implemented method of enhancing a publication (as a part of an extended reality system) comprising:

U.S. Patent No. 10,748,339, Claim 7:
The method of claim 1, wherein the one or more augmented reality display media comprise a 360-degree video presenting one or more scenes of a story included in the publication from a first person point of view of a character included in the story.

U.S. Patent No. 10,748,339, Claim 1 (continued):
a) receiving live camera feed video, captured by a user, comprising a plurality of triggers and video of a publication that has a plurality of components (reads on detecting a first story);
b) identifying, automatically, at least one first trigger in the live camera feed video by analyzing the live camera feed video, the at least one first trigger engaged to provide one or more augmented reality display media (trigger that provides and outputs display media);
c) identifying the one or more augmented reality display media associated with the at least one first trigger and pertaining to the publication, the one or more augmented reality displays media comprising one or more of the plurality of components of the publication;
d) presenting, via a first visual output device and to the user, one or more augmented reality display media;
e) identifying, in the one or more augmented reality display media, at least one digital second trigger comprising a direction and duration of interaction, the at least one digital second trigger engaged by user interaction with the plurality of components of the publication (at least one second trigger depicted within the one or more display media; extended reality pertaining to a story in a publication that includes components of the story) in a portion of the one or more augmented reality displays within a user's field of view, the engaging of the digital second trigger causing identification of one or more virtual reality environments pertaining to the publication; and
f) terminating presentation of one or more augmented reality display media, and seamlessly presenting to the user, via the first visual output device or a second visual output device, the one or more virtual reality environments so that the user is unaware of a transition between the one or more augmented reality display media and the one or more virtual reality environments.

Instant Application, Claim 2:
The method of claim 1, wherein the source story is at least one of a television-based story, a video streaming service based story, or movie based story.

U.S. Patent No. 10,748,339, Claim 7.

Instant Application, Claim 13 maps to U.S. Patent No. 10,748,339, Claim 23.
Instant Application, Claim 18 maps to U.S. Patent No. 10,748,339, Claim 28.
Dependent claims are rejected for depending from their corresponding independent claims.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1, 3-5, 7, 9, 13-14, 16-18 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Keane et al. (US 20150220231 A1) in view of Li et al. (US 20160314624 A1).
Regarding claim 1, Keane discloses a method performed by a computer that is part of an extended reality system (Keane [0092], “a method for generating and displaying holographic visual aids associated with a story … the process of FIG. 5A is performed by a mobile device”), the method comprising:
implementing one or more source stories, each source story including one or more pieces of content and display media (Keane [0029], “generating and displaying holographic visual aids associated with a story (implementing a story; the story comprises)”; [0031], “a "Jack and the Beanstalk" book (exemplary source story including pieces of content)”; [0087], “FIGS. 4A-4D provide examples of various environments in which one or more virtual objects associated with a story (exemplary display media included in a story)”);
detecting a selection of a first source story of the one or more source stories (Keane [0029], “the HMD detects the end user reading a story … The story may be embodied within a reading object”; [0094], “In step 504, a reading object is identified (identifying a reading object comprising a story and detecting an end user reading the story, reads on detecting a selection of a first source story) … A reading object may include a book (exemplary first story source)”);
identifying at least one first trigger that provides one or more display media related to the source story (Keane [0096], “one or more virtual objects may include a first set of virtual objects associated with a first triggering event (a trigger that provides one or more display media related to the source story) … first triggering event may also include the detection of the end user of the HMD gazing at or focusing on a particular portion of the reading object (exemplary identification of at least one trigger)”);
causing output of the one or more display media (Keane [0096], “A triggering event may determine when one of the first set of virtual objects is generated and displayed to an end user of an HMD (trigger causing output of the one or more display media).”);
identifying one or more extended reality environments pertaining to the source story, the one or more extended reality environments including a piece from the source story (Keane [0029], “generating and displaying holographic visual aids associated with a story while the story is being read.”; [0030], “an augmented reality system capable of (identifying an AR environment pertaining to a story) generating and displaying holographic visual aids related to a story”);
causing output of the one or more extended reality environments (Keane [0090], “FIG. 4C depicts one embodiment of an augmented reality environment 410 as seen by an end user wearing an HMD”).
Keane does not disclose (missing feature highlighted where applicable)
identifying one or more extended reality environments pertaining to the source story in a portion of the one or more display media by identifying at least one second trigger depicted within the one or more display media
terminating presentation of the one or more display media
However, Li discloses (missing feature highlighted where applicable)
identifying one or more extended reality environments pertaining to the source story in a portion of the one or more display media by identifying at least one second trigger depicted within the one or more display media (Li fig. 8A-D; [0043], “In FIG. 8A, an AR scene 400.sub.AR may be triggered (exemplary first trigger) upon the user arriving at the location of the exhibit 410 … FIG. 8C shows a transition scene 400.sub.AR/VR which may gradually switch the scene from AR to VR mode … This switch may be achieved for example by user's activation via the virtual menu 430 (a portion of the one or more display media by identifying at least one exemplary second trigger depicted within the one or more display media)”);
terminating presentation of the one or more display media (Li [0043], “FIG. 8C shows a transition scene 400.sub.AR/VR which may gradually switch (switch/terminate presentation of one display media) the scene from AR to VR mode … Once the ambient environment is blocked out from the field of view 420, the device 110 may immerse the user within a VR scene 400.sub.VR (FIG. 8D)”)
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Keane with Li to generate virtual environments based on displayed virtual objects. This would have enhanced Keane by creating a more dynamic and interactive reality system.
Regarding claim 3, Keane in view of Li discloses the method of claim 1, further comprising:
dynamically shifting between the one or more display media and the one or more extended reality environments to provide an immersive interactive experience that allows a user to inhabit a role of the character in the source story (Keane [0042], “an augmented reality environment in which a virtual character's actions … The particular person's utterances may be used to synchronize and control the playback or display of predetermined animated sequences corresponding with portions of a story being read by the particular person (exemplary immersive interactive experience that allows a user to inhabit a role of the character in the source story).”; Li [0043], “a shuttering mechanism gradually blocking out ambient light as the device 110 switches from AR to VR mode. Once the ambient environment is blocked out from the field of view 420, the device 110 may immerse the user within a VR scene 400.sub.VR (FIG. 8D) (dynamically shifting between the display media and the one or more extended reality environments to provide an immersive interactive experience that allows a user to inhabit a role of the character in the source story)”).
Regarding claim 4, Keane in view of Li discloses the method of claim 1, further comprising:
dynamically shifting between the one or more display media and the one or more extended reality environments to immerse a user inside the piece from the source story (Keane [0042], “an augmented reality environment in which a virtual character's actions … The particular person's utterances may be used to synchronize and control the playback or display of predetermined animated sequences corresponding with portions of a story being read by the particular person (immerse a user inside the piece from the source story).”; Li [0043], “a shuttering mechanism gradually blocking out ambient light as the device 110 switches from AR to VR mode. Once the ambient environment is blocked out from the field of view 420, the device 110 may immerse the user within a VR scene 400.sub.VR (FIG. 8D) (dynamically shifting between the one or more display media and the one or more extended reality environments)”).
Regarding claim 5, Keane in view of Li discloses the method of claim 1, further comprising:
creating a set of expanded pieces related to the source story, the set of expanded pieces associated with at least one trigger in the one or more extended reality environments (Keane [0031], “a personalized augmented reality environment wherein one or more virtual objects are generated … the one or more virtual objects may correspond with a predefined character animation (e.g., a virtual ogre mouthing "fee-fi-fo-fum") … The predefined character animation may be synchronized to a portion of a story corresponding with the character being animated (set of expanded objects related to the source story)”; Li [0033], “the objects may be fully synthesized digitally (set of expanded objects) … the objects may be interactive allowing the user to select an object … some interactions may trigger (set of expanded pieces associated with at least one trigger in the extended reality environments) a switch between the AR mode to the VR mode”);
identifying the at least one trigger in the one or more extended reality environments (Li [0033], “the objects may be interactive allowing the user to select an object … interactions may trigger (identifying at least one trigger in the extended reality environment) a switch between the AR mode to the VR mode”);
adapting at least one piece included in the set of expanded pieces into a three dimensional (3D) immersive piece (Li fig. 8D; [0043], “the device 110 switches from AR to VR mode. Once the ambient environment is blocked out from the field of view 420, the device 110 may immerse the user within a VR scene 400.sub.VR (FIG. 8D) placing the user within a synthesized digital environment displaying a virtual rendition 410.sub.VR of the dinosaur (adapting the dinosaur/at least one piece included in the set of expanded pieces into a three dimensional (3D) immersive piece) within for example a pre-historic setting.”); and
presenting, via the visual output device, the 3D immersive piece within the one or more extended reality environments (Li fig. 8D; [0043], “the device 110 switches from AR to VR mode. Once the ambient environment is blocked out from the field of view 420, the device 110 may immerse the user within a VR scene 400.sub.VR (FIG. 8D) placing the user within a synthesized digital environment displaying a virtual rendition 410.sub.VR of the dinosaur within for example a pre-historic setting (presenting, via the visual output device, the 3D immersive piece within the extended reality environment).”).
Regarding claim 7, Keane in view of Li discloses the method of claim 5, further comprising:
dynamically shifting between the one or more extended reality environments including the piece from the source story and the one or more extended reality environments including the 3D immersive piece to immerse a user in a universe of characters included in the source story (Li [0044], “the device 110 may immerse the user within a new VR scene 490.sub.VR which contains 410.sub.VR and 480.sub.VR (FIG. 8H), placing the user within a synthesized digital environment displaying virtual renditions 410.sub.VR and 480.sub.VR of the dinosaurs, and allowing the user to observe for example the two dinosaurs interacting with each other within a pre-historic setting.”).
Claim 13 recites an extended reality system that performs functions corresponding to the method of claim 1. As such, the mapping and rejection of claim 1 above are considered applicable to the extended reality system of claim 13.
Additionally,
Keane discloses an extended reality system (Keane fig. 1) comprising
an imaging device (Keane [0044], “a capture device 213 (e.g., a front facing camera and/or microphone) in communication with processing unit 236. The capture device 213 may include one or more cameras for recording digital images and/or videos”);
an output device configured to output augmented reality display media and/or extended reality environments (Keane [0087], “one or more virtual objects associated with a story (e.g., a holographic visual aid for the story) may be generated and/or displayed to an end user of a head-mounted display device”); and
a computer comprising one or more processors and at least one memory including instructions, the instructions configured to cause the one or more processors to perform operations (Keane [0045], “Processing unit 236 may include one or more processors and a memory for storing computer readable instructions to be executed on the one or more processors”).
Regarding claim 14, Keane in view of Li discloses the extended reality system of claim 13, wherein the imaging device is a camera configured to capture a live camera feed including at least a portion of the image data (Keane [0047], “a field of view captured by the capture device 213 corresponds with the field of view as seen by an end user of HMD 200.”; [0062], “a front facing video camera 113 that can capture video and still images”).
Regarding claim 16, Keane in view of Li discloses the extended reality system of claim 13, wherein an aspect of the source story includes at least one of a character associated with the source story, a piece described in the source story, a location associated with the source story, or an object described in the source story (Keane [0090], “FIG. 4C depicts one embodiment of an augmented reality environment 410 as seen by an end user wearing an HMD (exemplary character associated with the source story)”).
Regarding claim 17, Keane in view of Li discloses the extended reality system of claim 13, further comprising
identifying, in the image data of the eyes of the user, a user response to the virtual overlay included in the one or more virtual display media (Keane [0048], “HMD 200 may perform gaze detection for each eye of an end user's eyes using gaze detection elements and a three-dimensional coordinate system in relation to one or more human eye elements such as a cornea center”; [0105], “triggering event may also include the detection of the end user of the HMD gazing”; Li fig. 1; [0033], “user may interact with object 130 (a cube). Interaction with the cube 130 (a user response to the virtual overlay included in the one or more virtual display media) (represented by a change in surface shading and shown as cube 130′) may trigger a switch from AR mode to VR mode (illustrated in the bottom picture). Object 140 is now shown in a VR scene as a fully synthesized cylinder 140′ ”); and
dynamically shifting between the one or more virtual display media and the one or more extended reality environments based on the user response (Li fig. 1; [0033], “user may interact with object 130 (a cube). Interaction with the cube 130 (represented by a change in surface shading and shown as cube 130′) may trigger a switch from AR mode to VR mode (illustrated in the bottom picture). Object 140 is now shown in a VR scene as a fully synthesized cylinder 140′ (dynamically shifting between the one or more virtual display media and the one or more extended reality environments based on the user response)”).
Claim 18 recites a non-transitory computer readable medium that performs functions corresponding to the method of claim 1. As such, the mapping and rejection of claim 1 above are considered applicable to the non-transitory computer readable medium of claim 18.
Additionally,
Keane discloses a non-transitory computer readable medium (Keane [0053], “a computer program product may be stored on a variety of computer system readable media. Such media could be chosen from any available media that is accessible by the processor 125, including non-transitory, volatile and non-volatile media, removable and non-removable media.”) comprising
non-transitory computer readable medium including one or more sequences of instructions that, when executed by one or more processors, cause the one or more processors to perform operations (Keane [0045], “Processing unit 236 may include one or more processors and a memory for storing computer readable instructions to be executed on the one or more processors”).
Regarding claim 20, Keane in view of Li discloses the non-transitory computer readable medium of claim 18, further comprising:
dynamically shift between the one or more display media and the one or more extended reality environments to provide an immersive interactive experience that allows a user to inhabit a role of the character in the source story (Keane [0042], “an augmented reality environment in which a virtual character's actions … The particular person's utterances may be used to synchronize and control the playback or display of predetermined animated sequences corresponding with portions of a story being read by the particular person (exemplary immersive interactive experience that allows a user to inhabit a role of the character in the source story).”; Li [0043], “a shuttering mechanism gradually blocking out ambient light as the device 110 switches from AR to VR mode. Once the ambient environment is blocked out from the field of view 420, the device 110 may immerse the user within a VR scene 400.sub.VR (FIG. 8D) (dynamically shifting between the display media and the one or more extended reality environments to provide an immersive interactive experience that allows a user to inhabit a role of the character in the source story)”).
Claims 2, 6, 8, 10-12, 15 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Keane in view of Li and further in view of Haseltine (US 20150209664 A1).
Regarding claim 2, Keane in view of Li discloses the method of claim 1, but does not disclose wherein the source story is at least one of a television-based story, a video streaming service based story, or movie based story.
However, Haseltine discloses
the source story is at least one of a television-based story, a video streaming service based story, or movie based story (Haseltine [0133], “device could then continue to interact with the user as part of a storytelling experience … device could output the video and audio streams depicting the fictional character”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Keane further with Haseltine to include streaming videos in the generation of reality environments. This would have enhanced Keane by providing a wide variety of content to users.
Regarding claim 6, Keane in view of Li discloses the method of claim 5, but does not disclose wherein the set of expanded pieces includes a backstory or sidestory associated with a particular character included in the source story.
However, Haseltine discloses
the set of expanded pieces includes a backstory or sidestory associated with a particular character included in the source story (Haseltine [0050], “one embodiment provides a story having one or more branches, where the story can proceed down one of a plurality of different arcs (side-stories associated with story characters)”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Keane further with Haseltine to enable story branching. This would have enhanced Keane by enabling users to experience multiple storylines.
Regarding claim 8, Keane in view of Li discloses the method of claim 1, further comprising image data, wherein the image data is obtained from one or more live camera feeds (Keane [0047], “a field of view captured by the capture device 213 corresponds with the field of view as seen by an end user of HMD 200.”; [0062], “a front facing video camera 113 that can capture video and still images”), and
the one or more display media includes a morphed representation of a user that shows the user as a particular character in the source story (Keane [0029], “generating and displaying holographic visual aids associated with a story … control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud”; [0126], “FIGS. 7B and 7C depict one embodiment of a synchronized playback of three predefined holographic animations 752-754 (exemplary morphed representation of a user that shows the user as a particular character in the source story) based on the detection of utterances from a particular person 760.”);
Keane in view of Li does not disclose
wherein the image data is of a portion of a mirror; and
the method further comprising presenting the one or more display media by displaying a simulated reflection of the morphed representation of the user in the portion of the mirror to provide the user with a first-person point of view of the particular character in the source story.
However, Haseltine discloses
wherein the image data is of a portion of a mirror (Haseltine [0011], “apparatus that includes an enclosure having a one-way mirrored portion … the apparatus includes … a camera sensor configured to capture an image and to convert the captured image into an electronic signal (the image data is of a portion of a mirror).”)
the method further comprising presenting the one or more display media by displaying a simulated reflection of the morphed representation of the user in the portion of the mirror to provide the user with a first-person point of view of the particular character in the source story (Haseltine [0047], “the magic mirror device could participate in a storytelling experience”; [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 (exemplary simulated reflection of the morphed representation of the user) mimics the user's 410 movements (presenting the display media by displaying a simulated reflection of the morphed representation of the user in the portion of the mirror to provide the user with a first-person point of view of the particular character in the source story) … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Keane further with Haseltine to include a mirroring device for generating a reality environment. This would have improved Keane by adding a versatile way of presenting virtual environments.
Regarding claim 9, Keane in view of Li and further in view of Haseltine discloses the method of claim 8, further comprising transitioning from the one or more live camera feeds to the one or more augmented reality display media including the simulated reflection of the morphed representation of the user to immerse the user in the source story (Keane [0047], “a field of view captured by the capture device 213 corresponds with the field of view as seen by an end user of HMD 200.”; [0062], “a front facing video camera 113 that can capture video and still images”; Keane [0029], “generating and displaying holographic visual aids associated with a story … control the playback speed of the predefined character animation in real-time such that the character is perceived to be lip-syncing the story being read aloud”; [0126], “FIGS. 7B and 7C depict one embodiment of a synchronized playback of three predefined holographic animations 752-754 (augmented reality display media including exemplary simulated reflection of the morphed representation of the user)”; Haseltine [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 mimics the user's 410 movements … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”); and
simulating one or more pieces in the story within the one or more display media with the user playing a role of the particular character in the source story (Haseltine [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 mimics the user's 410 movements (user playing a role of the particular character in the source story) … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”).
Regarding claim 10, Keane in view of Li discloses the method of claim 1,
But does not disclose
further comprising image data, the image data is of a portion of a mirror; and
the one or more extended reality environments include the portion of the mirror reproduced as a simulated mirror, wherein the simulated mirror displays a morphed representation of a user that shows the user as a particular character in the source story; and
the method further comprising presenting the one or more extended reality environments by displaying a simulated reflection of the simulated mirror including the morphed representation of the user to provide the user with a first-person point of view of the particular character in the source story.
However, Haseltine discloses
further comprising image data, the image data is of a portion of a mirror (Haseltine [0011], “apparatus that includes an enclosure having a one-way mirrored portion … the apparatus includes … a camera sensor configured to capture an image and to convert the captured image into an electronic signal (the image data is of a portion of a mirror).”); and
the one or more extended reality environments include the portion of the mirror reproduced as a simulated mirror, wherein the simulated mirror displays a morphed representation of a user that shows the user as a particular character in the source story (Haseltine [0047], “the magic mirror device could participate in a storytelling experience”; [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 (exemplary simulated reflection of the morphed representation of the user) mimics the user's 410 movements (presenting the display media by displaying a simulated reflection of the morphed representation of the user in the portion of the mirror to provide the user with a first-person point of view of the particular character in the source story) … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”); and
the method further comprising presenting the one or more extended reality environments by displaying a simulated reflection of the simulated mirror including the morphed representation of the user to provide the user with a first-person point of view of the particular character in the source story (Haseltine [0047], “the magic mirror device could participate in a storytelling experience”; [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 (exemplary simulated reflection of the morphed representation of the user) mimics the user's 410 movements (provide the user with a first-person point of view of the particular character in the source story) … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Keane with Haseltine to include a mirroring device for generating an extended reality environment. This would have improved Keane by adding a versatile way of presenting virtual environments.
Regarding claim 11, Keane in view of Li and further in view of Haseltine discloses the method of claim 10, further comprising:
transitioning from the one or more live camera feeds to the one or more extended reality environments including the simulated mirror that displays the morphed representation of the user to immerse the user in the source story (Haseltine [0047], “the magic mirror device could participate in a storytelling experience”; [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 mimics the user's 410 movements … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”); and
simulating one or more pieces in the story within the one or more extended reality environments with the user playing a role of the particular character in the source story (Haseltine [0090], “a forest scene … is projecting an image of a faerie 420 flying through the air with pixie dust streaming behind … The projector object could then track Susie's movements (e.g., using one or more cameras) and could project the faerie image 420 in such a way that the faerie image 420 mimics (simulating one or more pieces in the story within the one or more extended reality environments with the user playing a role of the particular character in the source story) the user's 410 movements … Doing so provides an immersive play environment in which the interactive devices can respond to a user in real-time and in a contextually appropriate manner.”).
Regarding claim 12, Keane in view of Li discloses the method of claim 1,
But does not disclose further comprising:
aggregating user experience settings that track a user response to stimuli included in the one or more display media and the one or more extended reality environments; and
modifying a presentation of the one or more display media or the one or more extended reality environments based on the user experience settings to fit a psychological condition of a user.
However, Haseltine discloses
aggregating user experience settings that track a user response to stimuli included in the one or more display media and the one or more extended reality environments (Haseltine [0036], “Interactive objects may also maintain historical data (aggregated user experience settings) characterizing previous interactions with users and other objects (aggregating user experience settings that track a user response to stimuli included in the one or more display media and the one or more extended reality environments). Such historical data may relate to … interactions with a specific user, interactions with other interactive objects, user behavior (user response to stimuli)”); and
modifying a presentation of the one or more display media or the one or more extended reality environments based on the user experience settings to fit a psychological condition of a user (Haseltine [0050], “a story having one or more branches, where the story can proceed down one of a plurality of different arcs … one of the arcs could be selected based on … the user's history of actions (modifying a presentation of the display media or the extended reality environments based on the user experience settings to fit a psychological condition of a user)”; [0084], “Interactive objects may also maintain historical data characterizing previous interactions with users … Such historical data may relate to … user behavior (a psychological condition of a user) ”).
It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to further modify Keane with Haseltine to present new reality experiences based on a user's history of past behavior. This would have been done to present users with a customized and desirable virtual experience.
Claim 15 recites an extended reality system which corresponds to the function performed by the method of claim 2. As such, the mapping and rejection of claim 2 above is considered applicable to the extended reality system of claim 15.
Claim 19 recites a non-transitory computer readable medium which corresponds to the function performed by the method of claim 2. As such, the mapping and rejection of claim 2 above is considered applicable to the non-transitory computer readable medium of claim 19.
Response to Arguments
Applicant's arguments filed 02/20/2026 have been fully considered but they are moot in view of the amendments to the independent claims. The amendments necessitated further consideration, search, and new grounds of rejection. Additionally, on pg. 8, the applicant argues:
Li is directed to a specific hardware-based solution for head-mounted displays involving a "shuttering mechanism". Li teaches that when a user selects a virtual object, the device closes physical shutters to block out ambient light and switches the display from an Augmented Reality (AR) mode to a Virtual Reality (VR) mode. Li thus discloses a global hardware mode-switch triggered by a user interaction event and Li does not teach or suggest the nested trigger content architecture (the first and second triggers) claimed by the Applicant, where the system analyzes the visual content of the display media itself to identify a second trigger depicted within it. In Li, the VR environment entirely replaces the AR view by closing shutters; the new environment is not found within a portion of the media itself via a nested graphical trigger. Thus, Keane and Li, alone or in combination fail to disclose each element of the independent claims.
Examiner respectfully disagrees: Li describes a shuttering mechanism as a way to switch between extended reality environments. The switch is accomplished using triggers within the displayed media, which enables a user to transition to an extended reality experience that is related to the display media. Therefore, Li, as cited, reads on the broadly recited limitation in question.
Applicant further argues:
Furthermore, the prior art teaches away from the combination under MPEP 2143.01 when a reference suggests that the line of development flowing from the reference would require a different path than that taken by the applicant, or if combining the references would destroy the primary reference's intended purpose. In this case, a person of ordinary skill in the art would not be motivated to combine Keane and Li. Keane describes an augmented reality storybook designed for the seamless integration of digital content with a physical book. Li, conversely, teaches a system predicated on physical occlusion-specifically, shutters that physically block out the real world entirely to improve VR immersion. To combine these teachings would destroy the very purpose of Keane's invention. If a child were reading Keane's "Magic Book" and selected a character, the application of Li's teachings would result in the device's shutters closing, physically blinding the child to the physical book they are holding. This is not a "more dynamic" system; it is a system that creates a jarring disconnection from the physical publication. Therefore, Li teaches away from the seamless narrative integration sought by Keane and the independent claims are allowable over Keane and Li for this additional reason.
Examiner respectfully disagrees: In response to applicant’s argument that there is no teaching, suggestion, or motivation to combine the references, the examiner recognizes that obviousness may be established by combining or modifying the teachings of the prior art to produce the claimed invention where there is some teaching, suggestion, or motivation to do so found either in the references themselves or in the knowledge generally available to one of ordinary skill in the art. See In re Fine, 837 F.2d 1071, 5 USPQ2d 1596 (Fed. Cir. 1988), In re Jones, 958 F.2d 347, 21 USPQ2d 1941 (Fed. Cir. 1992), and KSR International Co. v. Teleflex, Inc., 550 U.S. 398, 82 USPQ2d 1385 (2007). In this case, Li describes a shuttering mechanism as a way to switch between extended reality environments. Applicant’s argument, “If a child were reading Keane's "Magic Book" and selected a character, the application of Li's teachings would result in the device's shutters closing, physically blinding the child to the physical book they are holding. This is not a "more dynamic" system; it is a system that creates a jarring disconnection from the physical publication,” is not persuasive because switching between AR and VR environments is known in the art. A person reading a book may desire to switch from an AR view to a VR environment; as is known in the art, no external view may be present in a VR view, and the transition to VR is accomplished by blocking out the view of the real environment. Applicant’s assertion of a jarring disconnection is not substantiated and would not have prevented a person of ordinary skill in the art from combining Keane with Li. Therefore, applicant’s arguments with respect to teaching away are also not persuasive.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JITESH PATEL whose telephone number is (571)270-3313. The examiner can normally be reached 8am - 5pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JITESH PATEL/Primary Examiner, Art Unit 2612