DETAILED ACTION
This Office Action is responsive to the Amendment filed 6 November 2025.
Claims 1-8, 10-13, 21, 25, 29 and 30 are now pending. The Examiner acknowledges
the amendments to claims 3, 4, 6, 29 and 30.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 6 and 25 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant), regards as the invention.
At line 2 of claim 6, it is unclear whether “plurality of different tasks” refers to the same element as, or a different element from, “a plurality of different tasks” recited at lines 7-8 of claim 1. Additionally, “sequence comprising plurality of different tasks” appears to be grammatically incorrect, as it lacks an article (e.g., “comprising a plurality”).
Claim 25 recites “One or more non-transitory computer-readable media…”. Based on this preamble, it is unclear whether the claim is directed to just one non-transitory computer-readable medium or to multiple media. Further, in the event the claim reads on more than one medium, the metes and bounds of the claim are indefinite, as it cannot be ascertained from the specification what would constitute “or more”.
Claim Rejections - 35 USC § 103
6. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
7. The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
8. Claims 1-4, 7, 8, 10-13, 21, 25, 29 and 30 are rejected under 35 U.S.C. 103 as being unpatentable over Ravindran et al. (U.S. Patent No. 10,311,645).
Regarding claim 1, Ravindran et al. (hereinafter Ravindran) discloses a method for assessing Autism Spectrum Disorder (ASD) (Abstract; col. 4, lines 50-67), the method comprising:
at a computing device that comprises at least one processor and memory and that is coupled to an extended reality (XR) device (col. 6, lines 48-67 – col. 7, lines 1-15):
receiving data identifying one or more behaviors associated with symptoms of ASD for a subject (col. 7, lines 46-67 – col. 8, lines 1-45);
receiving, based on the one or more behaviors, a scenario specifying a plurality of different tasks (col. 8, lines 53-67: “…select a type of virtual reality or augmented reality content or a treatment program (e.g., selecting the type of virtual environment such as the ‘zoo’ or a ‘train station’)”), wherein the plurality of different tasks are configured to train skills associated with the symptoms of ASD (such as attention; speech; eye gaze/contact; or social connections – col. 9, lines 8-67);
causing the XR device to present, based on the scenario, an XR environment (col. 9, lines 24-67 – col. 10, lines 1-15);
causing the XR device to present, in the XR environment and to the subject, a first task of the different tasks specified in the scenario, wherein the first task is configured to train a first skill associated with improvement of at least one of the behaviors associated with the symptoms of ASD, and wherein the first task is configured to prompt the subject to interact with an object in the XR environment (col. 13, lines 38-58 – the user is evaluated on his or her ability to respond to pointing or to interaction with the police avatar, thus tracking social capability);
detecting an interaction with the object in the XR environment, the interaction being associated with the subject (col. 17, lines 62-67 – col. 18, lines 1-23 and 39-44; does the user converse/interact with the police avatar? Is the subject’s response volume too low?);
generating, based on the interaction, interaction data that indicates performance, by the subject, of the one or more behaviors associated with the symptoms of ASD (col. 18, lines 50-56 – “if an audio sensor collects data that the subject has failed to respond to a question by the police office avatar for a certain duration (e.g., 30 seconds, 1 minute, etc.), the teaching module can be programmed to have the police officer respond with “Did you hear me?” In another example, if an audio sensor collects data from the subject's response, the response can be processed via a speech-to-text conversion and compared to a model text answer. If the comparison yields a % similarity above a predetermined threshold, the teaching module can record the conversation session as a successful conversation”);
selecting, from the different tasks, a second one of the different tasks (a user in a zoo setting is presented with animal models and must shift his or her gaze to the noise-making animal model), wherein the second task is configured to train a second skill (successfully shifting gaze) associated with improvement of another one or more of the behaviors associated with the symptoms of ASD (see Fig. 2 and col. 16, lines 1-61), the second skill (successfully shifting gaze) being different from the first skill (of conversational response); and
modifying, based on the scenario, the XR environment to present the second task (col. 16, lines 42-65).
While Ravindran teaches the step of selecting a second one of the different tasks as indicated above, Ravindran does not explicitly disclose that the step of selecting is based on the interaction data. However, Ravindran makes such obvious, as Ravindran teaches that prescription or recommendation of the various therapies and modules available to the VR system can be tailored to the needs of the individual subject, either manually by a human expert or based on data collected throughout the subject’s use of the VR system (col. 21, lines 1-17; col. 2, lines 9-13), and further that the modules/scenes may target different skills and, once a milestone is reached within a module/scene (col. 14, lines 40-65), the modules/scenes can be triggered to change (col. 11, lines 3-58; and col. 5, lines 8-26).
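The mechanism relied upon above (a speech-to-text response compared to a model answer, with a predetermined similarity threshold gating a milestone-triggered change of module/scene) can be illustrated with a minimal sketch. The use of Python’s difflib, the 0.8 threshold value, and the function names below are assumptions for illustration only, not Ravindran’s disclosed implementation.

```python
# Illustrative sketch only; not Ravindran's actual implementation.
# Assumes the transcribed subject response and the model answer are plain
# strings; difflib's ratio() stands in for an unspecified similarity metric.
from difflib import SequenceMatcher

SIMILARITY_THRESHOLD = 0.8  # the "predetermined threshold" (hypothetical value)

def response_similarity(transcribed: str, model_answer: str) -> float:
    """Compare a speech-to-text transcription to a model text answer."""
    return SequenceMatcher(None, transcribed.lower(), model_answer.lower()).ratio()

def select_next_task(tasks: list, current_index: int, similarity: float) -> int:
    """Advance to the next task once the milestone (a successful
    conversation) is reached; otherwise repeat the current task."""
    if similarity >= SIMILARITY_THRESHOLD and current_index + 1 < len(tasks):
        return current_index + 1  # milestone reached: trigger a scene change
    return current_index          # keep training the current skill

# Example: a successful conversation with the officer avatar advances the
# subject from the conversation task to the gaze-shifting task.
tasks = ["conversation_with_officer", "gaze_shift_to_animal"]
sim = response_similarity("my name is alex", "My name is Alex.")
next_task_index = select_next_task(tasks, 0, sim)
```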
Regarding claim 2, the method further comprises receiving, from one or more biometric tracking devices, biometric data that is associated with the subject and collected during performance of the first task, wherein generating the interaction data comprises generating the interaction data further based on the biometric data (col. 18, lines 45-56).
Regarding claim 3, the method further comprises calculating a score associated with the one or more behaviors, wherein the calculation comprises comparing the biometric data to a standard established by the scenario (col. 18, lines 50-56: “the response can be processed via a speech-to-text conversion and compared to a model text answer…[i]f the comparison yields a % similarity above a predetermined threshold, the teaching module can record the conversation session as a successful conversation”). While Ravindran does not explicitly teach that the step of selecting the second task is further based on the score, Ravindran makes such obvious, as Ravindran teaches that the score may indicate that the subject performed the task successfully (col. 18, lines 53-56), which would render obvious progressing to a second, different task upon successful completion of the first.
Regarding claim 4, the method further comprises calculating a score associated with the one or more behaviors, wherein the calculation comprises calculating performance metrics, corresponding to the interaction, that indicate how well the subject performs the one or more behaviors in the XR environment (col. 18, lines 50-56: “the response can be processed via a speech-to-text conversion and compared to a model text answer…[i]f the comparison yields a % similarity above a predetermined threshold, the teaching module can record the conversation session as a successful conversation”). While Ravindran does not explicitly teach that the step of selecting the second task is further based on the score, Ravindran makes such obvious, as Ravindran teaches that the score may indicate that the subject performed the task successfully (col. 18, lines 53-56), which would render obvious progressing to a second, different task upon successful completion of the first.
Regarding claim 7, while Ravindran does not disclose explicitly that the method further comprises: generating, after modifying the XR environment to present the second task, updated interaction data based on further interactions in response to the second task, Ravindran makes such obvious as Ravindran teaches the continuous, real-time collection of data during (and after) operation of the system in order to measure the subject’s progress towards the one or more therapeutic goals (col. 6, lines 33-47; col. 7, lines 46-50; col. 13, lines 1-12; and col. 16, lines 42-67).
Regarding claim 8, the first skill corresponds to one or more of: speech patterns of the subject; eye gaze of the subject; a location of the subject as compared to a location of an avatar object; a decision made in the XR environment; or movement of the subject (col. 17, lines 62-67 – col. 18, lines 1-56: interaction with police scenario).
Regarding claim 29, the method further comprises: collecting eye tracking data by monitoring, using an eye tracking system of the XR device, eye motions of the subject (col. 23, lines 52-62; col. 13, lines 1-36); and generating gaze data by identifying, based on the eye tracking data, one or more second objects in the XR environment which the subject looked at during performance of at least one task of the plurality of different tasks (gazing at different animal models as described with respect to Figs. 2-4); wherein generating interaction data is further based on the gaze data (col. 16, lines 42-67 – col. 17, lines 1-47).
Regarding claim 30, the method further comprises: receiving, via one or more microphones of the XR device, voice data corresponding to vocal interaction, by the subject, with the object (police officer avatar) in the environment (col. 18, lines 24-49); and either calculating, based on the voice data, a confidence score associated with a confidence of the subject when speaking; or calculating, based on the voice data, a clarity score associated with a clarity of speech of the subject (col. 18, lines 50-56: “the response can be processed via a speech-to-text conversion and compared to a model text answer…[i]f the comparison yields a % similarity above a predetermined threshold, the teaching module can record the conversation session as a successful conversation”).
Regarding claim 10, while Ravindran does not disclose explicitly that the method further comprises: generating, after modifying the XR environment to present the second task, updated interaction data based on further interactions in response to the second task, Ravindran makes such obvious as Ravindran teaches the continuous, real-time collection of data during (and after) operation of the system in order to measure the subject’s progress towards the one or more therapeutic goals (col. 6, lines 33-47; col. 7, lines 46-50; col. 13, lines 1-12; and col. 16, lines 42-67).
Regarding claim 11, the gaze data indicates whether the subject looked at a particular region (such as the noise-making animal model) of the one or more second objects (col. 16, lines 42-61).
Regarding claim 12, the one or more second objects comprise an avatar object (animal models are avatars -col. 14, lines 17-20). And while Ravindran does not disclose explicitly that the particular region comprises eyes of the avatar object, Ravindran makes such obvious as Ravindran discloses measurement/tracking of gaze of the subject’s eyes with respect to the avatars presented on the screen (as indicated above), with the teaching of commanding the subject to “Be sure to Look at him” (Fig. 5) and the teaching of the skills being trained by the system “include the training of developmental skills such as joint attention (e.g., eye contact)” (col. 9, lines 47-54).
Regarding claim 13, the gaze data indicates whether the subject looked away from the one or more second objects for a period of time (col. 17, lines 6-29).
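The gaze-data determinations addressed in claims 11-13 (whether the subject looked at a particular region of one of the second objects, and whether the subject looked away for a period of time) can be illustrated with a minimal sketch. The timestamped-sample format and the 2.0-second look-away threshold are assumptions for illustration, not features drawn from Ravindran.

```python
# Illustrative sketch only; the sample format and the 2.0-second look-away
# threshold are assumptions, not Ravindran's design.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GazeSample:
    t: float                # timestamp in seconds
    target: Optional[str]   # object/region the gaze ray hit, or None

def looked_at_region(samples, region):
    """Claim 11/12-style check: did the subject ever fixate the particular
    region (e.g., the eyes of an avatar object)?"""
    return any(s.target == region for s in samples)

def looked_away_for(samples, tracked_objects, threshold_s=2.0):
    """Claim 13-style check: did the subject look away from the one or more
    second objects for at least threshold_s seconds?"""
    away_start = None
    for s in samples:
        if s.target in tracked_objects:
            away_start = None            # gaze returned to a tracked object
        elif away_start is None:
            away_start = s.t             # gaze just left the tracked objects
        elif s.t - away_start >= threshold_s:
            return True                  # sustained look-away detected
    return False

# Example: the subject fixates the lion's eyes, then looks away for ~3 seconds.
samples = [GazeSample(0.0, "lion_eyes"), GazeSample(1.0, None),
           GazeSample(2.5, None), GazeSample(4.2, None)]
assert looked_at_region(samples, "lion_eyes")
assert looked_away_for(samples, {"lion_eyes", "lion_body"})
```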
Regarding claim 21, Ravindran discloses an apparatus, coupled to an XR device (col. 6, lines 48-67 – col. 7, lines 1-15), for assessing Autism Spectrum Disorder (ASD) (Abstract; col. 4, lines 50-67), the apparatus comprising:
one or more processors; and memory storing instructions that, when executed by the one or more processors (col. 21, lines 20-56; and col. 22, lines 3-67), cause the apparatus to:
receive data identifying one or more behaviors associated with symptoms of ASD for a subject (col. 7, lines 46-67 – col. 8, lines 1-45);
receive, based on the one or more behaviors, a scenario specifying a plurality of different tasks (col. 8, lines 53-67: “…select a type of virtual reality or augmented reality content or a treatment program (e.g., selecting the type of virtual environment such as the ‘zoo’ or a ‘train station’)”), wherein the plurality of different tasks are configured to train skills associated with the symptoms of ASD (such as attention; speech; eye gaze/contact; or social connections – col. 9, lines 8-67);
cause the XR device to present, based on the scenario, an XR environment (col. 9, lines 24-67 – col. 10, lines 1-15);
cause the XR device to present, in the XR environment and to the subject, a first task of the different tasks specified in the scenario, wherein the first task is configured to train a first skill associated with improvement of at least one of the behaviors associated with the symptoms of ASD, and wherein the first task is configured to prompt the subject to interact with an object in the XR environment (col. 13, lines 38-58 – the user is evaluated on his or her ability to respond to pointing or to interaction with the police avatar, thus tracking social capability);
detect an interaction with the object in the XR environment, the interaction being associated with the subject (col. 17, lines 62-67 – col. 18, lines 1-23 and 39-44; does the user converse/interact with the police avatar? Is the subject’s response volume too low?);
generate, based on the interaction, interaction data that indicates performance, by the subject, of the one or more behaviors associated with the symptoms of ASD (col. 18, lines 50-56 – “if an audio sensor collects data that the subject has failed to respond to a question by the police office avatar for a certain duration (e.g., 30 seconds, 1 minute, etc.), the teaching module can be programmed to have the police officer respond with “Did you hear me?” In another example, if an audio sensor collects data from the subject's response, the response can be processed via a speech-to-text conversion and compared to a model text answer. If the comparison yields a % similarity above a predetermined threshold, the teaching module can record the conversation session as a successful conversation”);
select, from the different tasks, a second one of the different tasks (a user in a zoo setting is presented with animal models and must shift his or her gaze to the noise-making animal model), wherein the second task is configured to train a second skill (successfully shifting gaze) associated with improvement of another one or more of the behaviors associated with the symptoms of ASD (see Fig. 2 and col. 16, lines 1-61), the second skill (successfully shifting gaze) being different from the first skill (of conversational response); and
modify, based on the scenario, the XR environment to present the second task (col. 16, lines 42-65).
While Ravindran teaches the step of selecting a second one of the different tasks as indicated above, Ravindran does not explicitly disclose that the step of selecting is based on the interaction data. However, Ravindran makes such obvious, as Ravindran teaches that prescription or recommendation of the various therapies and modules available to the VR system can be tailored to the needs of the individual subject, either manually by a human expert or based on data collected throughout the subject’s use of the VR system (col. 21, lines 1-17; col. 2, lines 9-13), and further that the modules/scenes may target different skills and, once a milestone is reached within a module/scene (col. 14, lines 40-65), the modules/scenes can be triggered to change (col. 11, lines 3-58; and col. 5, lines 8-26).
Regarding claim 25, Ravindran discloses one or more non-transitory computer-readable media comprising instructions that, when executed by at least one processor of an apparatus (col. 21, lines 20-56; and col. 22, lines 3-67), coupled to an extended reality (XR) device (col. 6, lines 48-67 – col. 7, lines 1-15), for assessing Autism Spectrum Disorder (ASD) (Abstract; col. 4, lines 50-67), cause the apparatus to:
receive data identifying one or more behaviors associated with symptoms of ASD for a subject (col. 7, lines 46-67 – col. 8, lines 1-45);
receive, based on the one or more behaviors, a scenario specifying a plurality of different tasks (col. 8, lines 53-67: “…select a type of virtual reality or augmented reality content or a treatment program (e.g., selecting the type of virtual environment such as the ‘zoo’ or a ‘train station’)”), wherein the plurality of different tasks are configured to train skills associated with the symptoms of ASD (such as attention; speech; eye gaze/contact; or social connections – col. 9, lines 8-67);
cause the XR device to present, based on the scenario, an XR environment (col. 9, lines 24-67 – col. 10, lines 1-15);
cause the XR device to present, in the XR environment and to the subject, a first task of the different tasks specified in the scenario, wherein the first task is configured to train a first skill associated with improvement of at least one of the behaviors associated with the symptoms of ASD, and wherein the first task is configured to prompt the subject to interact with an object in the XR environment (col. 13, lines 38-58 – the user is evaluated on his or her ability to respond to pointing or to interaction with the police avatar, thus tracking social capability);
detect an interaction with the object in the XR environment, the interaction being associated with the subject (col. 17, lines 62-67 – col. 18, lines 1-23 and 39-44; does the user converse/interact with the police avatar? Is the subject’s response volume too low?);
generate, based on the interaction, interaction data that indicates performance, by the subject, of the one or more behaviors associated with the symptoms of ASD (col. 18, lines 50-56 – “if an audio sensor collects data that the subject has failed to respond to a question by the police office avatar for a certain duration (e.g., 30 seconds, 1 minute, etc.), the teaching module can be programmed to have the police officer respond with “Did you hear me?” In another example, if an audio sensor collects data from the subject's response, the response can be processed via a speech-to-text conversion and compared to a model text answer. If the comparison yields a % similarity above a predetermined threshold, the teaching module can record the conversation session as a successful conversation”);
select, from the different tasks, a second one of the different tasks (a user in a zoo setting is presented with animal models and must shift his or her gaze to the noise-making animal model), wherein the second task is configured to train a second skill (successfully shifting gaze) associated with improvement of another one or more of the behaviors associated with the symptoms of ASD (see Fig. 2 and col. 16, lines 1-61), the second skill (successfully shifting gaze) being different from the first skill (of conversational response); and
modify, based on the scenario, the XR environment to present the second task (col. 16, lines 42-65).
While Ravindran teaches the step of selecting a second one of the different tasks as indicated above, Ravindran does not explicitly disclose that the step of selecting is based on the interaction data. However, Ravindran makes such obvious, as Ravindran teaches that prescription or recommendation of the various therapies and modules available to the VR system can be tailored to the needs of the individual subject, either manually by a human expert or based on data collected throughout the subject’s use of the VR system (col. 21, lines 1-17; col. 2, lines 9-13), and further that the modules/scenes may target different skills and, once a milestone is reached within a module/scene (col. 14, lines 40-65), the modules/scenes can be triggered to change (col. 11, lines 3-58; and col. 5, lines 8-26).
Allowable Subject Matter
9. Claim 5 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
Regarding claim 5, while the prior art teaches a method for assessing Autism Spectrum Disorder (ASD), the method comprising: at a computing device that comprises at least one processor and memory and that is coupled to an extended reality (XR) device: receiving data identifying one or more behaviors associated with symptoms of ASD for a subject; receiving, based on the one or more behaviors, a scenario specifying a plurality of different tasks, wherein the plurality of different tasks are configured to train skills associated with the symptoms of ASD; causing the XR device to present, based on the scenario, an XR environment; causing the XR device to present, in the XR environment and to the subject, a first task of the different tasks specified in the scenario, wherein the first task is configured to train a first skill associated with improvement of at least one of the behaviors associated with the symptoms of ASD, and wherein the first task is configured to prompt the subject to interact with an object in the XR environment; detecting an interaction with the object in the XR environment, the interaction being associated with the subject; generating, based on the interaction, interaction data that indicates performance, by the subject, of the one or more behaviors associated with the symptoms of ASD; selecting, from the different tasks and based on the interaction data, a second one of the different tasks, wherein the second task is configured to train a second skill associated with improvement of another one or more of the behaviors associated with the symptoms of ASD, the second skill being different from the first skill; and modifying, based on the scenario, the XR environment to present the second task, the prior art of record does not teach or fairly suggest a method for assessing Autism Spectrum Disorder (ASD) as claimed by Applicant, wherein selecting the second task is responsive to the interaction data indicating that the user is unresponsive to the first task, and wherein modifying the XR environment comprises presenting the second task without human interaction.
10. Claim 6 would be allowable if rewritten to overcome the rejection(s) under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, set forth in this Office action and to include all of the limitations of the base claim and any intervening claims.
Response to Arguments
11. Applicant’s arguments filed 6 November 2025 with respect to the rejection of claims 3, 4, 10-13, 29 and 30 under 35 U.S.C. 112(b) have been fully considered and are persuasive in light of the amendments; however, new grounds of rejection are presented above.
12. Applicant’s arguments filed 6 November 2025 with respect to the rejection of claims 1-8, 10-13, 21, 25, 29 and 30 under 35 U.S.C. 102(a)(1) citing Sahin (‘033) have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made under 35 U.S.C. 103 citing Ravindran (‘645); see rejection supra.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTINE HOPKINS MATTHEWS whose telephone number is (571)272-9058. The examiner can normally be reached Monday - Friday, 7:30 am - 4:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Charles A. Marmor, II, can be reached at (571) 272-4730. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTINE H MATTHEWS/Primary Examiner, Art Unit 3791