DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1-20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1-20 of U.S. Patent No. 12,135,753 B2. Although the claims at issue are not identical, they are not patentably distinct from each other because every element of claims 1-20 of the pending application is present in claims 1-20 of the ‘753 Patent. Comparison of claim 11 to claim 1 of the ‘753 Patent is illustrated below with the differences highlighted. As shown, while the first limitation is worded differently, the substance of the claimed subject matter remains the same.
Claim 11 of the instant application
11. A method, comprising:
receiving, by a processing system including a processor, a metaverse profile associated with a user, wherein the metaverse profile includes information associated with second user capabilities and second content presentable within immersive media by an end user device of the user, wherein the receiving the immersive media enables first user capabilities and first content in the immersive media, wherein the immersive media is obtained over a network utilizing a first network slice, a first spectrum resource allocation, and a first Radio Access Technology (RAT);
initiating, by the processing system, at least one of a second network slice, a second spectrum resource allocation, or a second RAT based on the metaverse profile;
detecting, by the processing system, a triggering event associated with the immersive media, wherein the initiating is prior to the detecting of the triggering event; and
continuing providing, by the processing system, the immersive media for presentation by the end user device, wherein the continuing providing the immersive media is according to the metaverse profile, wherein the continuing providing the immersive media is over the network utilizing the at least one of the second network slice, the second spectrum resource allocation, or the second RAT.
Claim 1 of the ‘753 Patent
1. A method, comprising:
providing, by a processing system including a processor, immersive media for presentation by an end user device of a user, wherein the providing the immersive media enables first user capabilities and first content in the immersive media, wherein the providing the immersive media is over a network utilizing a first network slice, a first spectrum resource allocation, and a first Radio Access Technology (RAT);
receiving, by the processing system, a request;
responsive to the request, obtaining, by the processing system, a metaverse profile associated with the user, wherein the metaverse profile includes information associated with second user capabilities and second content presentable in the immersive media;
initiating, by the processing system, at least one of a second network slice, a second spectrum resource allocation, or a second RAT based on the metaverse profile;
detecting, by the processing system, a triggering event associated with the immersive media, wherein the initiating is prior to the detecting of the triggering event;
continuing providing, by the processing system, the immersive media for presentation by the end user device, wherein the continuing providing the immersive media is according to the metaverse profile, wherein the continuing providing the immersive media is over the network utilizing the at least one of the second network slice, the second spectrum resource allocation, or the second RAT;
monitoring, by the processing system, one of user actions, user interactions, or both user actions and interactions of the user with the immersive media to obtain monitored activity; and
applying, by the processing system, a machine learning model to the monitored activity to detect the triggering event and to detect a transition event in the immersive media, wherein the immersive media is one of virtual reality or augmented reality, and wherein the obtaining the metaverse profile is in response to the transition event.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
Claim 3 is rejected under 35 U.S.C. 112(b) as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor regards as the invention.
As to claim 3, the phrase “one of” at the end of the preamble renders the claim indefinite because the claim does not appear to contain a listing of elements separated by “and” or “or” to denote choices to select from. For purposes of examination, the claim will be interpreted as one of: user actions or user interactions.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-8 and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Tan et al. (US 2023/0350487 A1) in view of Dowlatkhah et al. (US 2020/0311993 A1).
As per claim 1, Tan teaches a device (Fig. 1), comprising:
a processing system including a processor (Fig. 1); and
a memory that stores executable instructions that, when executed by the processing system, facilitate performance of operations (par. [0019]; Fig. 1), the operations comprising:
obtaining a second metaverse profile associated with a user [a second type of extended reality], wherein the second metaverse profile includes second information associated with second user capabilities and second content presentable in the immersive media provided for presentation by an end user device of the user [the second type of extended reality could be a mixed reality that merges real world images with virtual images allowing the user to see portions of the real world while moving] (par. [0038]) according to a first metaverse profile of the user [a first type of extended reality] that includes first information associated with first user capabilities and first content in the immersive media [as the user moves from one location to another while wearing the head-mounted display device, the user switches from the first type of extended reality which could be a virtual reality to the second type of extended reality which could be a mixed reality] (par. [0038]-[0039]), and wherein the immersive media is provided over a network utilizing a first network slice, a first spectrum resource allocation, and a first Radio Access Technology (RAT) [performing communication over a shared communication frequency band] (par. [0021]-[0022], [0026], [0059]);
initiating at least one of a second network slice, a second spectrum resource allocation, or a second RAT based on the second metaverse profile [head-mounted display device may be suited to plural RF bands and ready to communicate utilizing different means based on the environment where the display device might be utilized while the user is moving] (par. [0031], [0060], [0081], [0095]);
detecting a triggering event associated with the immersive media, wherein the initiating is prior to the detecting of the triggering event [detecting an extended reality switching input, where the system is ready to change means of communication such as RF bands] (Fig. 5 steps 510 and 515, par. [0105]); and
continuing providing the immersive media for presentation by the end user device, wherein the continuing providing the immersive media is according to the second metaverse profile responsive to the triggering event (Fig. 5 step 520; par. [0106]), wherein the continuing providing the immersive media is over the network (par. [0002], [0005], [0042]; Figs. 1 and 2).
Tan does not expressly teach continuing providing the immersive media utilizing the at least one of the second network slice, the second spectrum resource allocation, or the second RAT.
Dowlatkhah is directed to allocating data for augmented reality or other next generation networks (abstract). In particular, Dowlatkhah teaches continuing providing an immersive media utilizing the at least one of a second network slice, a second spectrum resource allocation, or a second RAT [the network slice can be dedicated (thus, continuing) for a specific network function (e.g., extended reality (immersive media), augmented reality) to manage and allocate network resources] (par. [0060], [0061], [0063]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Tan by continuing providing an immersive media utilizing the at least one of a second network slice, a second spectrum resource allocation, or a second RAT in order to divide the access network by slices to separately address multiple needs (par. [0060]-[0061], [0064] in Dowlatkhah).
As to claim 2, Tan teaches that the second information comprises an identification of the second user capabilities, an identification of the second content, one or more network parameters associated with at least one of the second user capabilities or the second content, one or more QoS parameters associated with at least one of the second user capabilities or the second content, an identification of one or more applications for presenting at least one of the second user capabilities or the second content, or any combination thereof [the SLAM process is initiated that maps the physical environment and prepares a digital image of the physical world] (par. [0042]).
As to claim 3, Tan teaches that the immersive media is one of virtual reality or augmented reality (par. [0011], [0026], [0040]) and wherein the operations further comprise one of: applying a machine learning model to user actions and user interactions of the user with the immersive media to detect a transition event and to detect the triggering event, wherein the obtaining the second metaverse profile is in response to the transition event (par. [0036]).
As to claim 4, Tan teaches monitoring one of user actions, user interactions or both user actions and interactions of the user with the immersive media to obtain monitored activity [monitoring for the switching event] (par. [0036]); and applying a machine learning model to the monitored activity to detect a transition event in the immersive media, wherein the obtaining the metaverse profile is in response to the transition event [machine learning gesture detection algorithm] (par. [0036]).
As to claim 5, Tan teaches applying a machine learning model to user actions and user interactions of the user with the immersive media to determine profile adjustments; and adjusting at least one of the first or second metaverse profiles according to the profile adjustments.
As to claim 6, Tan teaches that the obtaining of the second metaverse profile comprises: selecting a plurality of metaverse profiles from a group of metaverse profiles of the user [VR from group of VR, AR and MR] (Fig. 5/509), wherein the group of metaverse profiles is generated prior to the providing of the immersive media (Fig. 5/510); and generating the second metaverse profile based on a portion of user capabilities and a portion of content associated with the plurality of metaverse profiles [the user may choose to cause a videoconference application to be executed and engage in a videoconference with another user that allows for mixed reality (MR) images to be shared between the users (thus, indicating metaverse profile of mixed reality and videoconference capability of second user). The extended reality switching system 152 may detect such changes in the application being executed and switch from the first type of extended reality to the second type of extended reality accordingly] (par. [0039]).
As to claim 7, Tan teaches that the immersive media is one of virtual reality or augmented reality (par. [0011], [0026], [0040]), and wherein the operations further comprise: monitoring user actions and user interactions of the user with the immersive media to detect a transition event and to detect the triggering event (par. [0106]; Fig. 5/510 and 515); wherein the obtaining the second metaverse profile is in response to the transition event (par. [0106]; Fig. 5/510 and 515); and wherein a selection of the portion of the user capabilities and the portion of the content defined in the plurality of metaverse profiles is based on the monitoring of the user actions and the user interactions [the extended reality switching system 152 may provide a user with capability of dynamically switching (user capability) from viewing a VR simulation (first content in immersive media), a MR simulation, or an AR simulation to one of the different simulations] (par. [0011], [0026], [0040]).
As to claim 8, Tan teaches that the operations further comprise: subsequently continuing providing the immersive media for presentation by the end user device, wherein the subsequently continuing providing the immersive media is performed without utilizing the second metaverse profile [where no triggering event is detected at block 515, the method 500 may return to block 510 to continue with the VR training session] (par. [0104]).
As to claim 10, Tan teaches that the continuing providing the immersive media according to the second metaverse profile includes providing in the immersive media at least a portion of: the first user capabilities, the first content, or a combination thereof (Fig. 5/520; par. [0106]).
As to claim 11, Tan teaches a method, comprising:
receiving, by a processing system including a processor, a metaverse profile associated with a user [a first type of extended reality], wherein the metaverse profile includes information associated with second user capabilities and second content presentable within immersive media by an end user device of the user, wherein the receiving the immersive media enables first user capabilities and first content in the immersive media [as the user moves from one location to another while wearing the head-mounted display device, the user switches from the first type of extended reality which could be a virtual reality to the second type of extended reality which could be a mixed reality] (par. [0038]-[0039]), wherein the immersive media is obtained over a network utilizing a first network slice, a first spectrum resource allocation, and a first Radio Access Technology (RAT) [performing communication over a shared communication frequency band] (par. [0021]-[0022], [0026], [0059]).
Tan also teaches the steps of initiating, detecting, and continuing providing, as discussed per claim 1 above.
Tan does not expressly teach continuing providing the immersive media utilizing the at least one of the second network slice, the second spectrum resource allocation, or the second RAT.
Dowlatkhah is directed to allocating data for augmented reality or other next generation networks (abstract). In particular, Dowlatkhah teaches continuing providing an immersive media utilizing the at least one of a second network slice, a second spectrum resource allocation, or a second RAT [the network slice can be dedicated (thus, continuing) for a specific network function (e.g., extended reality (immersive media), augmented reality) to manage and allocate network resources] (par. [0060], [0061], [0063]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method and system of Tan by continuing providing an immersive media utilizing the at least one of a second network slice, a second spectrum resource allocation, or a second RAT in order to divide the access network by slices to separately address multiple needs (par. [0060]-[0061], [0064] in Dowlatkhah).
As to claim 12, Tan teaches receiving, by the processing system, a request from the end user device (par. [0042]), wherein the information comprises an identification of the second user capabilities, an identification of the second content (par. [0039]), one or more network parameters associated with at least one of the second user capabilities or the second content, one or more QoS parameters associated with at least one of the second user capabilities or the second content, an identification of one or more applications for presenting at least one of the second user capabilities or the second content, or any combination thereof.
As to claim 13, Tan teaches that the providing the immersive media enabling the first user capabilities and the first content in the immersive media (par. [0011], [0026], [0040]) is according to a default metaverse profile that is generated by equipment of a first entity [generated via SLAM process] (par. [0012], [0101]) that operates the network, wherein the default metaverse profile includes default information [after the HMD is powered on, a SLAM process may be initiated that maps the physical environment and presents a digital image of the physical world or relative to landmarks in the physical environment as a virtual environment (default metaverse profile established) to the user; via modeling the surrounding environment as viewed from the perspective of the headset wearer, and rendering the modeled image and virtual elements (default information) in a three-dimensional environment matching or relative to the surrounding real-world environment] (par. [0012], [0101]) associated with the first user capabilities and the first content (par. [0011], [0026], [0040]), and wherein the second user capabilities and the second content in the immersive media of the metaverse profile is provided by equipment of a second entity operating as a content provider that is distinct from the first entity [the user may choose to cause a videoconference application to be executed (also viewed as a request) and engage in a videoconference with another user that allows for mixed reality images to be shared between the users (thus, indicating metaverse profile of mixed reality and videoconference capability of second user)] (par. [0039]).
As to claim 14, Tan teaches all the elements as discussed per claim 4 above.
As to claim 15, Tan teaches that a first entity operates the network [engine and CPU/GPU to conduct (operates) the SLAM computations] (par. [0012], [0031]), wherein the providing the immersive media enabling the first user capabilities and the first content in the immersive media (par. [0011], [0026], [0040]) is according to another metaverse profile that includes first information associated with the first user capabilities and the first content (par. [0011], [0026], [0040]), wherein the first user capabilities and the first content in the immersive media of the other metaverse profile are provided by equipment of a second entity operating as a content provider that is distinct from the first entity [this computer-generated perceptual information may include multiple sensory modalities such as visual, auditory, haptic, somatosensory and even olfactory modalities. The AR simulation may, therefore, include a projection of a real-world environment with information or objects added virtually as an overlay. MR simulations may include a merging of real-world images captured by the camera and virtual, computer-generated images] (par. [0040]), and wherein the second user capabilities and the second content in the immersive media of the metaverse profile are provided by equipment of a third entity [the user may choose to cause a videoconference application to be executed (also viewed as a request) and engage in a videoconference with another user that allows for mixed reality images to be shared between the users (thus, indicating metaverse profile of mixed reality and videoconference capability of second user)] (par. [0039], [0092]) operating as another content provider that is distinct from the first and second entities [via the server] (par. [0092]).
As to claim 16, Tan teaches all the elements as discussed per claim 10 above.
As to claim 17, Tan teaches all the elements as discussed per claim 6 above.
As to claim 18, Tan in view of Dowlatkhah teaches a non-transitory machine-readable medium, comprising executable instructions that, when executed by a processing system including a processor, facilitate performance of operations [operations via executing code/software] (par. [0019] in Tan), the operations comprising the method steps as discussed per claims 1 and 11 above.
As to claim 19, Tan teaches all the elements as discussed per claim 6 above.
As to claim 20, Tan teaches all the elements as discussed per claim 4 above.
Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Tan et al. in view of Dowlatkhah et al. and in further view of Mossoba et al. (US 2023/0315243 A1).
As to claim 9, Tan teaches that the operations further comprise: the subsequently continuing providing the immersive media without utilizing the second metaverse profile (par. [0104]).
Tan in view of Dowlatkhah fails to teach receiving a termination request from the end user device, wherein the continuing providing the media is in response to the termination request.
Mossoba teaches receiving a termination request from the end user device, wherein the continuing providing the media is in response to the termination request [via termination of a portion of the displayed content, thus continuing the other portion] (par. [0037], [0086]).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of Tan in view of Dowlatkhah by receiving a termination request from the end user device, wherein the continuing providing the media is in response to the termination request, in order to allow the deactivation component to communicate with an augmented reality device and request termination of the presentation of all or a portion of the displayed content (Mossoba: par. [0037]).
Related Prior Art
Lyle et al. (US 2009/0158150 A1) is directed to providing gameplay in a metaverse application (abstract). In particular, Lyle teaches permitting a character to switch between multiple avatar profiles during gameplay (par. [0010], [0029], [0032]). Therefore, the teachings of Lyle are applicable to the subject matter of the pending claims, and Lyle can be used in the rejection of claims 1-20.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to OLEG SURVILLO whose telephone number is (571) 272-9691. The examiner can normally be reached from 9:00 am to 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ario Etienne can be reached at 571-272-4001. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/OLEG SURVILLO/ Primary Examiner, Art Unit 2457