DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
This Office action is responsive to the following communication(s):
The application was filed on 8/30/2023 with an effective filing date of 8/30/2022 based on provisional application 63/373,908.
The status of the claims is summarized as below:
Claims 1-20 are pending.
Claims 1, 8, and 15 are independent claims.
Specification
The disclosure is objected to because of the following informalities:
“910” from “second step 910 described with respect to FIG. 8” is recited in ¶0140. It appears that applicant intended to refer to “810,” as there is no second step labeled 910 in Fig. 8.
“110” from “enables the electronic device 110 to send and receive data” is recited in ¶0157. It appears that applicant intended to refer to “electronic device 1110” based on the context of the paragraph; label “110” of Fig. 1 is referred to elsewhere (e.g., ¶0040) as “virtual assistant engine 110.”
Appropriate correction is required.
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 102
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claims 1-2, 4-9, 11-16, and 18-20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Ma et al. (US Pub. No. 2021/0248490, hereinafter Ma).
Per claim 1, Ma teaches:
An extended reality system comprising: ([0098-0099] Figs. 1 and 1A: a client device 100 may be an XR-capable device such as smart glasses or a headset);
a head-mounted device comprising a display that displays content to a user and one or more cameras that capture images of a visual field of the user wearing the head-mounted device; ([0099, 0104] Fig. 1A: the client device 100 (an XR device) includes a display 112 that displays real-world space data captured by camera 114 and may also have augmented reality features);
a processing system; and ([0106] Fig. 1A shows processor 102);
at least one memory storing instructions that, when executed by the processing system, cause the extended reality system to perform operations comprising: ([0106] Fig. 1A shows memory 108);
accessing data collected from user interactions while using a context aware policy in an extended reality environment, the context aware policy defining an action to be triggered upon satisfaction of one or more conditions within the extended reality; ([0179-0180] Fig. 6.5 shows that an association between usage/context data and user responses for each relevant activity context is created after the “Offline Context Analyzer Training” and “Offline Response Analyzer Training”; the combination of a specific set of activity context, user response, and GUI adaptation (action) is considered a context aware policy; when new contextual motion data is accessed in the “online prediction” phase of Fig. 6.5, the trained ML model predicts the activity context and user response based on the known context aware policy, which includes an interface adaptation based on the predicted user response (see steps 304-306 of Fig. 3, [0143-0149]); see also Fig. 3, Figs. 31-33, and associated paragraphs);
determining a support set for the context aware policy based on the data, wherein the support set is a subset of the data where the context aware policy has been correct as determined by the user interactions while using the context aware policy; ([0141] specific user intentions and context types are used as training data sets to train the machine learning system; [0150-0151] if the user completes the intention by using one of the GUI CTA (call to action) components, the GUI adaptation is considered a success; if not, the GUI change is considered a failure; the success cases form the support set for the specific context aware policy comprising the user activity context, predicted user intention/response, and GUI adaptation data set; e.g., Fig. 23 shows a context aware policy with an AR interface and an implicit user response of glancing at the interface [0213-0214]);
determining a confidence score for the context aware policy based on the data, wherein the confidence score is a measure of certainty that the one or more conditions will lead to a correct action for the user as determined by the user interactions while using the context aware policy; ([0180, 0182, 0214] Fig. 6.5 shows that at the beginning of the “Testing Model” phase, a predetermined performance metric (confidence score) is determined based on the predicted user response from the existing usage/context data in the “online prediction” phase for this particular activity context/user response/interface (context aware policy), which is further expanded in Fig. 6.7; [0215] success of a user response (correct action) is determined with a set of metrics having different directions; the examiner notes that “correct action” is interpreted broadly to include implicit user actions such as glancing at the screen, and may include a set of one or more metrics with a specific direction for evaluating success, such as a shorter duration of the glance);
generating a set of replacement policies for the context aware policy, wherein each replacement policy of the set of replacement policies defines a modified version of the one or more conditions or the action from the context aware policy; ([0180, 0183, 0220] Fig. 6.5 shows that as part of the “Testing Model” phase, a set of variations of the default interface are exposed to the user to evaluate a performance metric for each variation based on the predicted user response; a specific combination of context/predicted user response/variation of UI is interpreted as a replacement policy, and thus each modified version/variation of the UI constitutes a different replacement policy; Fig. 6.8 further expands on the “Testing Model” phase evaluation of the different UI variations);
determining a support set and confidence score for each replacement policy of the set of replacement policies based on the data; ([0221, 0224] Fig. 27 shows the performance metrics (confidence scores) for each variation of the UI, based on the usage data shown in Fig. 28, such as the glancing duration of a user; [0224] the support set for each variation could be the same if the interface includes the same components with merely appearance changes, as shown in Fig. 28, since the user response of glancing at the screen remains the same for all three variations);
identifying a replacement policy from the set of replacement policies as a replacement for the context aware policy when: (i) the support set of the context aware policy is a subset of the support set for the replacement policy, and (ii) the confidence score of the replacement policy is greater than the confidence score of the context aware policy; and ([0180, 0224] Fig. 6.5 shows that in the last step, the variation with the highest reward (i.e., a confidence score higher than that of the original) is selected to be always presented to the user, thereby replacing the original policy, i.e., the activity context/user response/interface policy; and the support set of the selected variation is the same as that of the original interface (a set is a subset of itself), since variations of the interface always result in the predicted user response, i.e., glancing at the interface per Fig. 28);
updating the one or more conditions or the action defined by the context aware policy with the modified version of the one or more conditions or the action defined by the replacement policy to generate an updated context aware policy ([0183] Fig. 6.8 shows that variations of the interface (the modified action of the replacement policies) are evaluated for performance metrics and a translated reward, and the variation of the interface with the highest reward is deemed the most desirable interface and is served to the user in the future for the same user response based on the corresponding activity context, in place of the original default interface).
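For clarity of record, the claimed selection criterion mapped above can be illustrated as follows. This is an illustrative sketch only, with hypothetical names chosen by the examiner; it is neither applicant's disclosed implementation nor Ma's, and merely restates conditions (i) and (ii) of claim 1 in executable form:

```python
# Illustrative sketch of claim 1's replacement-selection criterion.
# "support" is modeled as a set of data points where the policy was correct;
# "confidence" as a numeric certainty score. All names are hypothetical.

def select_replacement(original, candidates):
    """Return the first candidate replacement policy satisfying both:
    (i)  the original policy's support set is a subset of the
         candidate's support set, and
    (ii) the candidate's confidence score is greater than the
         original's confidence score.
    Returns None if no candidate qualifies (original policy is kept)."""
    for candidate in candidates:
        if (original["support"] <= candidate["support"]          # (i) subset test
                and candidate["confidence"] > original["confidence"]):  # (ii)
            return candidate
    return None

# Example: the second candidate satisfies both conditions.
original = {"support": {1, 2}, "confidence": 0.6}
candidates = [
    {"support": {1, 3}, "confidence": 0.9},     # fails (i): {1, 2} is not a subset
    {"support": {1, 2, 3}, "confidence": 0.8},  # satisfies (i) and (ii)
]
chosen = select_replacement(original, candidates)
```

Note that, as mapped above, Ma's case where a UI variation keeps the same support set still satisfies condition (i), since every set is a subset of itself.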
Per claim 2, Ma teaches all the limitations of claim 1, and further teaches:
wherein the operations further comprise executing the updated context aware policy, and wherein executing the updated context aware policy comprises: ([0183] the interface variation with the highest reward is presented (executed) in the future for the same predicted user response and activity context)
determining that the one or more conditions defined by the updated context aware policy have been satisfied and, in response to determining the one or more conditions have been satisfied, executing the action defined by the updated context aware policy. ([0182-0183] Fig. 6.5, Fig. 6.8: when the same predicted activity context and user response (the same conditions defined in the policy) are encountered in the future, the ML model presents the interface variation with the highest reward to the user).
Per claim 4, Ma teaches all the limitations of claim 1, and further teaches:
wherein the operations further comprise:
identifying one or more relationships between the context aware policy and the replacement policy, wherein the identifying comprises comparing the context aware policy and the replacement policy and determining one or more modifications made to the one or more conditions or the action defined by the context aware policy to generate the modified version of the one or more conditions or the action defined by the replacement policy; ([0180-0183] the “Testing Model” phase serves different variations of the user interface, each differing in interface components or appearance (modified action), where the evaluated performance metrics and translated reward of one interface variation are sent as feedback to the ML model to help serve the next variation);
generating one or more refinement suggestions for the context aware policy based on the one or more relationships identified between the context aware policy and the replacement policy; and ([0183] the performance metrics and translated reward are fed back into the ML model to help generate the next variation of the interface; e.g., [0224-0226] Figs. 28-29 show different interface variations with different refinements);
generating a user interface comprising the one or more refinement suggestions within the extended reality environment that is displayed as content on the display. ([0183] the performance metrics and translated reward are fed back into the ML model to help generate the next variation of the interface; e.g., [0224-0226] Figs. 28-29 show different interface variations with different refinements that are displayed to the user to obtain additional feedback).
Per claim 5, Ma teaches all the limitations of claim 4, and further teaches:
wherein the operations further comprise receiving input from the user interacting with the user interface, wherein the input is a selection of at least one of one or more refinement suggestions for the context aware policy, and wherein the one or more conditions or the action defined by the context aware policy are updated with the modified version of the one or more conditions or the action defined by the replacement policy to generate the updated context aware policy based on the selection of the at least one of one or more refinement suggestions. ([0224-0226] Figs. 28-29 show different variations of the interface, each of which receives a user response; Fig. 29 receives different user input on the changed component of the interface, such as a bigger button, where the user response/input feedback received is used as the basis for the performance metric and reward evaluation, which results in selection of the interface with the highest reward).
Per claim 6, Ma teaches all the limitations of claim 4, and further teaches:
The extended reality system of claim 4,
wherein the one or more refinement suggestions comprise changing the action, changing at least one of the one or more conditions, adding an “or” condition to the one or more conditions, adding a condition to the one or more conditions, or removing a condition from the one or more conditions. ([0224-0226] Figs. 28-29 show that the interface for each variation was changed (changing the action) as the interface refinement suggestion).
Per claim 7, Ma teaches all the limitations of claim 1, and further teaches:
wherein the operations further comprise:
executing the context aware policy and collecting the data from the user interactions while using the context aware policy in the extended reality environment. ([0180, 0183] Fig. 6.5 shows that in the “Testing Model” phase, each interface variation, including the default interface, is exposed to the user while feedback data is collected from the user interaction to further evaluate each interface; see also Fig. 31 and associated paragraphs).
Per claim 8, claim 8 is a method claim that includes limitations substantially the same as those of claim 1, and is likewise rejected.
Per claims 9 and 11-14, claims 9 and 11-14 include limitations substantially the same as those of claims 2 and 4-7, respectively, and are likewise rejected.
Per claim 15, claim 15 is a medium claim (Fig. 1A, memory 108) that includes limitations substantially the same as those of claim 1, and is likewise rejected.
Per claims 16 and 18-20, claims 16 and 18-20 include limitations substantially the same as those of claims 2 and 4-6, respectively, and are likewise rejected.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 3, 10, and 17 are rejected under 35 U.S.C. 103 as being unpatentable over Ma, in view of Subramanya et al. (US Pub. No. 2021/0142189, hereinafter Subramanya).
Per claim 3, Ma teaches all the limitations of claim 1, but does not explicitly teach that the data from user interactions comprises a sentiment of the user: “wherein the data collected from the user interactions comprises an indication of a sentiment of the user towards the action defined by the context aware policy”.
However, Subramanya teaches a method of predicting user intent based on collected context data:
wherein the data collected from the user interactions comprises an indication of a sentiment of the user towards the action defined by the context aware policy ([0043, 0070] context data such as user sentiment during a conversation is collected).
Subramanya and Ma are analogous art because they both teach methods of predicting user intent or responses based on collected context data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention, having the teachings of Ma and Subramanya before him/her, to modify the teachings of Ma to include the teachings of Subramanya so that the data collected from user interactions includes user sentiment. One would be motivated to make the combination, with a reasonable expectation of success, because it would further improve identification of user intents, which would help to better predict user responses ([0070]).
Per claims 10 and 17, claims 10 and 17 each include limitations substantially the same as those of claim 3, and are likewise rejected.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
US Patent Application Publications
US 20190373101 A1
DOTAN-COHEN; Dikla et al.
Computerized system for providing visual representations of predicted user event patterns, has processors for causing display of user event patterns, where user event patterns are visually represented as sequence of events
Applicant is required under 37 C.F.R. § 1.111(c) to consider these references fully when responding to this action.
The examiner requests, in response to this Office action, that support be shown for language added to any original claims on amendment and for any new claims. That is, indicate support for newly added claim language by specifically pointing to the page(s) and line number(s) in the specification and/or the drawing figure(s). This will assist the examiner in prosecuting the application.
When responding to this Office action, applicant is advised to clearly point out the patentable novelty which he or she thinks the claims present, in view of the state of the art disclosed by the references cited or the objections made. He or she must also show how the amendments avoid such references or objections. See 37 CFR 1.111(c).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to PHOEBE X PAN whose telephone number is (571)270-7794. The examiner can normally be reached M-F 9am-6pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Fred Ehichioya can be reached at (571) 272-4034. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/PHOEBE X PAN/Examiner, Art Unit 2179
/IRETE F EHICHIOYA/Supervisory Patent Examiner, Art Unit 2179