Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Objections
Claim 22 is objected to because of the following informalities: the claim appears to contain a typographical error in its last line, "[78]-[79]". Appropriate correction is required.
Response to Arguments
Applicant's arguments filed June 20, 2025, have been fully considered, but they are not persuasive. Applicant's arguments are directed to the amendments and to newly added claims 21-23. The Examiner respectfully submits that the cited references clearly teach the limitations in question.
That is, the references teach the limitations of the independent claims to integrate, using the quantum computing component, the extended reality session with the one or more dimensions in real time (Peleg FIGS. 39-48, and direction, location, and attention for producing virtual audio output at [0567]-[0575] and [0623]-[0624], and quantum and artificial intelligence computing disclosed at [0137]-[0140], [0166], [0244], and [0636]), wherein the integration comprises a seamless integration between the one or more dimensions in the extended reality session (Peleg FIGS. 39-48, and direction, location, and attention tracking for audio output at [0567]-[0575], [0602]-[0607] (of which various embodiments are provided), and [0623]-[0624]; and describing continuous analysis of input data to predict and produce continuous output (e.g., seamless) at [0680]-[0681]). Therefore, these independent claims, and all claims depending therefrom, stand rejected.
Furthermore, with regard to newly added claims 21-23, these claims are addressed below using the references as cited. Therefore, these claims likewise stand rejected.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-2, 5-12, 14-17 and 19-23 are rejected under 35 U.S.C. 103 as being unpatentable over Peleg et al., US 2023/0135787 A1 (hereinafter “Peleg”) in view of Berliner et al., US 2023/0152851 A1 (hereinafter “Berliner”).
Regarding claim 1, Peleg discloses a system (FIG. 2, system 200, at [0086]) for dynamically modifying extended reality environments and determining modified usages in real time using quantum computing (data managed using n-ary qubits as disclosed at [0150], the qubit, as known in the art, being the basic unit of quantum computing; [0005]-[0015] discussing various ways of modifying the extended reality system based on usages; use of neural network determination and machine learning algorithms at least at [0137], [0140], and [0488]; XR unit 204 of FIG. 2 as part of the system 200 at least at [0086]-[0087] generally), the system comprising:
a processing device (FIG. 4, processing device 460 of XR unit 204, at [0117]-[0118] and [0125]);
a non-transitory storage device containing instructions (FIG. 4, memory device 411, at [0118]) that, when executed by the processing device, cause the processing device to perform the steps of:
identify an extended reality session comprising at least one extended reality interaction by a user (input determination module 412, capable of detecting the type of input by the user, as disclosed at [0119]);
determine, based on the at least one extended reality interaction (FIG. 8, head tilt, at [0189]), a user intent for the extended reality session (FIG. 8, head tilt, at [0189], the processing device determining what the user wishes to do), wherein the user intent comprises an intended dimensional view of the extended reality session ([0077], describing a dimensional view change; see at least FIGS. 3 and 4, with virtual content determination modules 315 and 415 described at least at [0100]-[0101] and [0118]; head tilt or other head movement actions of the user described at least at [0180]-[0188], with intentions/wishes at FIG. 8 and [0189]);
dynamically, using a central usage measurement system comprising a quantum computing component (FIGS. 55A-55B and 57 and [0782]-[0793], noting data may be managed using n-ary qubits at [0150]), partition the extended reality session based on the user intent into at least one dimensional view comprising one or more dimensions ([0078], FIGS. 55A-57, and [0760]-[0761] and [0782]-[0793], describing usage of the wearable appliance, whether worn or not being the dimensional view; alternatively, FIGS. 19 and 23 and [0398]-[0400], a voice command when looking away providing a different audio output of the weather based on the direction of the head), wherein the partition of the extended reality session comprises splitting the extended reality session to the one or more dimensions (FIGS. 15-16 and [0305]-[0313], an operating system capable of switching tasks and objects accordingly based on inputs);
integrate, using the quantum computing component, the extended reality session with the one or more dimensions in real time (FIGS. 39-48, and direction, location, and attention for producing virtual audio output at [0567]-[0575] and [0623]-[0624], and quantum and artificial intelligence computing disclosed at [0137]-[0140], [0166], [0244], and [0636]), wherein the integration comprises a seamless integration between the one or more dimensions in the extended reality session (FIGS. 39-48, and direction, location, and attention tracking for audio output at [0567]-[0575], [0602]-[0607] (of which various embodiments are provided), and [0623]-[0624]; and describing continuous analysis of input data to predict and produce continuous output (e.g., seamless) at [0680]-[0681]);
automatically track the at least one dimensional view for the extended reality session based on the at least one extended reality interaction (see at least [0681] describing storing previous sessions of tracking head movement and introduction of a particular virtual object in relation to the head movement or other input of the user; further see at least FIG. 6 and determination of a duration of a certain duty cycle of that which is displayed based on the user performing an event as disclosed at least at [0212]-[0225] describing user input as an external event that may trigger the change, this information used to further anticipate user data to be presented based on a confidence score as discussed in [0681]; noting audio being tracked at [0623] based on a location and distance from a particular location).
However, although Peleg discloses confidence values for anticipated interactions based on previously detected interactions of the XR system (anticipating user data to be presented based on a confidence score, as discussed at [0681] and described at [0084]), Peleg does not explicitly disclose determine, in real time or near real time to an end of the extended reality session, an overall consumption value for the extended reality session based on the at least one dimensional view used during the extended reality session and a computing consumption over a span of time of the extended reality session, and determine, based on the overall consumption value for the extended reality session based on the at least one dimensional view, an estimate for the extended reality session.
In the same field of endeavor, Berliner discloses determine, in real time or near real time to an end of the extended reality session ([0435]-[0439] and [0482], temperature data stored every second, received continuously, or received at any other desired frequency, which constitutes a real-time determination), an overall consumption value (usage data at [0113], [0182], and [0429]-[0435], including temperature data) for the extended reality session based on the at least one dimensional view used during the extended reality session (describing peak usage times after gathering temperature and usage data of the head-mounted device at [0429]-[0440]) and the computing consumption over a span of time ([0436]-[0440], temperature variation over a period of time being analyzed at [0438]; heat and temperature measurements of a device based on drivers and processors are a known indicator of computing consumption; and [0409]-[0425]) of the extended reality session ([0429]-[0440] and [0467]-[0477] and all heat-related determinations, heat generation being a measure of computing consumption and of the amount of processing power consumed), and determine, based on the overall consumption value for the extended reality session based on the at least one dimensional view, an estimate for the extended reality session ([0438]-[0442], describing prediction of heat dissipation and wearing of the device based on usage and time periods in the day to accordingly estimate, and to adjust the display settings if certain measurements exceed thresholds; further predicting how the device may be used, predicting content, and predicting heat changes, all forms of estimation, at [0429]-[0432]).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the extended reality system with dynamic adjustments of Peleg to incorporate the consumption value storage and prediction value determination as disclosed by Berliner, because the references are within the same field of endeavor, namely, virtual reality systems with adaptive learning methods for user interaction and visual production. The motivation to combine these references would have been to improve the reliability and usable life of the appliance (see Berliner at [0449]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success.
Regarding claim 2, Peleg in view of Berliner discloses the system of claim 1 (see above), wherein the automatic tracking of the at least one dimensional view comprises tracking the at least one dimensional view at a current time and at least one dimensional view at a future time within the extended reality session (see at least Peleg at [0620]-[0625], describing a one-dimensional audio-only engagement with an exhibit and within a range of engagement with the exhibit; alternatively, time-of-day-based audio selection).
Regarding claim 5, Peleg in view of Berliner discloses the system of claim 1 (see above), wherein the overall consumption value is associated with a computing component consumption further comprising at least one of a power consumption, a memory storage consumption, or a computer processing unit usage (Berliner at [0459]-[0467] and [0470]-[0477], frame rate reduction or reduced video speeds; further describing predictive value determination at least at [0436]-[0442]).
Regarding claim 6, Peleg in view of Berliner discloses the system of claim 1 (see above), wherein the at least one dimensional view comprises at least one of a two-dimensional view, a three-dimensional view, a four-dimensional view, an auditory sound, an image, a video, or a tactile view (see at least Peleg describing switching to audio only when the head has been tilted, with FIGS. 19 and 23, wherein the user is facing away from the keyboard and therefore audio only is provided, as disclosed at [0398]-[0405]).
Regarding claim 7, Peleg in view of Berliner discloses the system of claim 1 (see above), wherein the instructions, when executed by the processing device, cause the at least one processing device to: identify whether the extended reality session comprises an incognito mode attribute (see at least Peleg describing various operation modes at [0349]-[0350], further describing privacy levels at [0103] and privately viewable objects at [0484], and further describing web browsing at least at [0637] and [0810]; a privacy mode for a web browsing application would have been obvious to one of ordinary skill in the art); and in an instance where the extended reality session comprises the incognito mode attribute, dynamically partition the extended reality session and determine the overall consumption value for the extended reality session comprising the incognito mode attribute (see Peleg at least FIGS. 4 and 6, further describing partitioning/separating the sessions at [0230]-[0236]).
Regarding claim 8, Peleg in view of Berliner discloses the system of claim 1 (see above), wherein the instructions, when executed by the processing device, cause the at least one processing device to: analyze, by a resolution scale model, at least one resolution for the at least one dimensional view of the extended reality session (see at least Peleg at [0155], describing resolution capability in a session, and further describing rendering accordingly at [0269] and scale at least at [0139], [0284], and [0344]); and determine the overall consumption value for the extended reality session based on the at least one resolution for the at least one dimensional view (see Berliner at [0410]-[0412]).
Regarding claim 9, Peleg in view of Berliner discloses the system of claim 8 (see above), wherein the at least one resolution is pre-determined based on a user preference, and wherein the user preference automatically retains the at least one resolution, increases the at least one resolution, or decreases the at least one resolution (see at least Peleg at [0155], further describing emphasizing and deemphasizing objects at [0269]-[0270]).
Regarding claim 10, Peleg in view of Berliner discloses the system of claim 9 (see above), wherein the user preference is dynamically updated based on a trained artificial neural network (ANN) model, and wherein the user preference updated by the ANN model is missing or unknown (see at least Peleg at [0137], disclosing learning from training examples; additional learning being produced from determining preferences and behaviors of the user at [0206]).
Regarding claim 11, it is similar in scope to claim 1 above, the only difference being that claim 11 is directed to a computer program product for dynamically modifying extended reality environments and determining modified usages in real time using quantum computing, the computer program product comprising a non-transitory computer-readable medium comprising code (see at least Peleg at FIG. 3, describing a software product and/or data stored on a non-transitory computer-readable medium at least at [0096], and neural networks, predictive modeling computing, and machine learning algorithms at [0137]) causing an apparatus to perform the steps of claim 1. Therefore, claim 11 is similarly analyzed and rejected as claim 1 above.
Regarding claim 12, it is similar in scope to claim 2 above; therefore, claim 12 is similarly analyzed and rejected as claim 2.
Regarding claim 14, it is similar in scope to claim 5 above; therefore, claim 14 is similarly analyzed and rejected as claim 5.
Regarding claim 15, it is similar in scope to claim 6 above; therefore, claim 15 is similarly analyzed and rejected as claim 6.
Regarding claim 16, it is similar in scope to claim 1 above, the only difference being that claim 16 is directed to a computer-implemented method for dynamically modifying extended reality environments and determining modified usages in real time using quantum computing (see at least Peleg describing a computer-implemented method at least at [0096], and neural networks, predictive modeling computing, and machine learning algorithms at [0137]), the computer-implemented method comprising the steps of claim 1 (see above). Therefore, claim 16 is similarly analyzed and rejected as claim 1 above.
Regarding claim 17, it is similar in scope to claim 2 above; therefore, claim 17 is similarly analyzed and rejected as claim 2.
Regarding claim 19, it is similar in scope to claim 5 above; therefore, claim 19 is similarly analyzed and rejected as claim 5.
Regarding claim 20, it is similar in scope to claim 6 above; therefore, claim 20 is similarly analyzed and rejected as claim 6.
Regarding claim 21, Peleg in view of Berliner discloses the system of claim 1 (see above), wherein the overall consumption value is based only on each dimensional view selected in the extended reality session (Berliner, usage data at [0113], [0182], and [0429]-[0436]; noting it would have been obvious to track a single usage dimension, such as temperature, time, or audio, as managed by Peleg or Berliner, for the commonly understood estimation of battery life (Peleg at [0749]-[0750]; Berliner at [0439])).
Regarding claim 22, Peleg in view of Berliner discloses the system of claim 1 (see above), and further discloses generating, by an artificial neural network (ANN) (Peleg at [0137], [0140], and [0145]), missing data for the extended reality session based on one or more user preferences of the user (Peleg, anticipated/predicted preferences at [0251] and [0735]).
Regarding claim 23, Peleg in view of Berliner discloses the system of claim 22 (see above), wherein the missing data comprises at least one of a preferred background for the at least one dimensional view, a lighting preference for the at least one dimensional view, or a speaker volume for the extended reality session (the limitation is recited in the alternative, so teaching one alternative satisfies the claim; Peleg teaches the speaker volume alternative, with volume control and settings being automatically adjusted at [0554] and [0564]-[0572]).
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Peleg in view of Berliner as applied to claim 1 above, and further in view of Li, US 2022/0122005 A1 (hereinafter "Li").
Regarding claim 4, Peleg in view of Berliner discloses the system of claim 1 (see above).
However, Peleg in view of Berliner does not explicitly disclose wherein the instructions, when executed by the processing device, cause the at least one processing device to:
identify a mobile resource account associated with a user account of the extended reality session; determine a current resource available for the mobile resource account; compare the current resource available to the estimate for the extended reality session; and auto deduct, from the mobile resource account, the estimate for the extended reality session in an instance where the estimate meets or is less than the current resource available.
In the same field of endeavor, Li discloses wherein the instructions, when executed by the processing device, cause the at least one processing device to:
identify a mobile resource account associated with a user account of the extended reality session (see at least FIG. 19, describing a flow diagram for determination of a route and estimated cost, described at least at [0106], and further describing VR or AR applications at least at [0135] and [0182]); determine a current resource available for the mobile resource account (see at least [0106], describing determination of sufficient funds for payment); compare the current resource available to the estimate for the extended reality session (see at least FIG. 19, describing comparison of available funds to the funds required for the destination at least at [0106]); and auto deduct, from the mobile resource account, the estimate for the extended reality session in an instance where the estimate meets or is less than the current resource available (see at least FIG. 19, further describing charging the card on file for the requested AR-experience trip, in view of [0135] and [0182], as would be understood by one of ordinary skill in the art).
Before the effective filing date, it would have been obvious to a person of ordinary skill in the art to modify the augmented reality system and anticipatory method of interaction of Peleg in view of Berliner to incorporate the payment and sufficient-funds checking method as disclosed by Li, because the references are within the same field of endeavor, namely, user interface systems and devices for interaction in a virtual/extended/augmented reality or space. The motivation to combine these references would have been to resolve payment or funds allocation before the expenditure of resources (see Li at least at [0106]-[0107]). Therefore, a person of ordinary skill in the art would have been motivated to combine the prior art to achieve the claimed invention, and there would have been a reasonable expectation of success.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Ruelas Ibarra et al., US 2025/0308399 A1;
Grant et al., US 2022/0028356 A1;
Hodge, US 2019/0394282 A1.
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SARVESH J NADKARNI whose telephone number is (571) 270-7562. The examiner can normally be reached 8AM-5PM, M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, LunYi Lao, can be reached at (571) 272-7671. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SARVESH J NADKARNI/Examiner, Art Unit 2621
/LUNYI LAO/Supervisory Patent Examiner, Art Unit 2621