Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This non-final Office action on the merits is in response to the patent application filed on 10/17/2024.
Status of Claims
Claims 1-17 are pending and considered below. This application claims the benefit of U.S. Provisional Application No. 63/591,239, filed on 10/18/2023.
Information Disclosure Statement
The information disclosure statement (IDS) filed on 10/17/2024 has been acknowledged. The submission is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Claim Interpretation
The following is a quotation of 35 U.S.C. 112(f):
(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:
An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.
The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.
As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term “means” or “step” or a term used as a substitute for “means” that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term “means” or “step” or the generic placeholder is modified by functional language, typically, but not always linked by the transition word “for” (e.g., “means for”) or another linking word or phrase, such as “configured to” or “so that”; and
(C) the term “means” or “step” or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.
Use of the word “means” (or “step”) in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.
Absence of the word “means” (or “step”) in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material or acts to entirely perform the recited function.
Claim limitations in this application that use the word “means” (or “step”) are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word “means” (or “step”) are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.
This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are: “a content module” in claims 1-17, and “a healthcare content module” in claim 9.
Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof.
If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.
Claim Rejections - 35 USC § 101
35 U.S.C. 101 reads as follows:
Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.
Claims 1-17 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.
Step 1
Under Step 1, the analysis is based on MPEP 2106.03: claims 1-13 are drawn to a method and claims 14-17 are drawn to a system. Thus, each claim, on its face, is directed to one of the statutory categories (i.e., useful process, machine, manufacture, or composition of matter) of 35 U.S.C. § 101.
Step 2A Prong One
Claim 1 recites the limitation of determining a content module to be played on a user device and tracking a user’s progress on viewing the content module. This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind or by using a pen and paper. But for the “user device” language, the claim encompasses a user simply selecting an instructional topic and monitoring or noting viewing progress in their mind or by using a pen and paper. The mere nominal recitation of the user device does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process, which is an abstract idea.
Claim 6 recites the limitation of extracting one or more content elements based on the subject of the content module, and generating the content module from the one or more content elements. This limitation, as drafted, is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind or by using a pen and paper. But for the “from the elements library” language, the claim encompasses a user simply selecting relevant educational materials and assembling them into a lesson or presentation in their mind or by using a pen and paper. The mere nominal recitation of the elements library does not take the claim limitation out of the mental processes grouping. Thus, the claim recites a mental process, which is an abstract idea.
Independent claim 14 recites identical or nearly identical steps with respect to claim 6 (and therefore also recites limitations that fall within this subject matter grouping of abstract ideas), and claim 14 is therefore determined to recite an abstract idea under the same analysis.
Step 2A Prong Two
The claimed limitations, as per method claim 1, include:
determining a content module to be played on a user device;
downloading the content module to the user device;
executing the content module on the user device; and
tracking a user’s progress on viewing the content module on the user device.
The claimed limitations, as per method claim 6, include:
storing content elements in an elements library;
receiving a request for a content module, the request comprising an identification of a subject of the content module;
extracting, from the elements library, one or more content elements based on the subject of the content module;
generating the content module from the one or more content elements; and
storing the content module in a computer-readable storage medium.
Examiner Note: underlined elements indicate additional elements of the claimed invention identified as performing the steps of the claimed invention.
The judicial exception expressed in claim 1 is not integrated into a practical application. The claim as a whole merely describes how to generally “apply” the concept of delivering or tracking educational or interactive content in a computer environment. The claimed computer components (i.e., executing the content module on the user device and the user device) are recited at a high level of generality and are merely invoked as tools to perform an existing process of selecting, presenting, and monitoring a user’s engagement with instructional material. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application.
The judicial exception expressed in claim 1 is not integrated into a practical application. The claim recites the additional element of downloading the content module to the user device. This limitation is recited at a high level of generality (i.e., as a general means of transmitting data), and amounts to merely data gathering, which is a form of insignificant extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
The judicial exception expressed in claim 6 is not integrated into a practical application. The claim as a whole merely describes how to generally “apply” the concept of selecting, assembling, and organizing educational or informational content in a computer environment. The claimed computer components (i.e., storing content elements in an elements library, from the elements library, and storing the content module in a computer-readable storage medium) are recited at a high level of generality and are merely invoked as tools to perform an existing process of arranging and saving instructional material. Simply implementing the abstract idea on a generic computer is not a practical application of the abstract idea. Accordingly, alone and in combination, these additional elements do not integrate the abstract idea into a practical application.
The judicial exception expressed in claim 6 is not integrated into a practical application. The claim recites the additional element of receiving a request for a content module, the request comprising an identification of a subject of the content module. This limitation is recited at a high level of generality (i.e., as a general means of receiving user input or instructions), and amounts to merely data gathering, which is a form of insignificant extra-solution activity. Accordingly, even in combination, these additional elements do not integrate the abstract idea into a practical application. The claim is directed to an abstract idea.
Therefore, under step 2A, the claims are directed to the abstract idea, and require further analysis under Step 2B.
Step 2B
Claim 1 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A, the claim as a whole merely describes how to generally “apply” the concept of delivering or tracking educational or interactive content in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea.
Claim 1 does not include an additional element that is sufficient to amount to significantly more than the judicial exception. The downloading limitation that was considered extra-solution activity in Step 2A has been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field. The specification does not provide any indication that the limitation of transmitting data is anything other than a conventional action that simply comes before selecting and presenting educational content to a user (see page 22, lines 19-22). For these reasons, there is no inventive concept. The claim is not patent eligible.
Claim 6 does not include additional elements that are sufficient to amount to significantly more than the judicial exception. As discussed with respect to Step 2A, the claim as a whole merely describes how to generally “apply” the concept of selecting, assembling, and organizing educational or informational content in a computer environment. Thus, even when viewed as a whole, nothing in the claim adds significantly more (i.e., an inventive concept) to the abstract idea.
Claim 6 does not include an additional element that is sufficient to amount to significantly more than the judicial exception. The receiving limitation that was considered extra-solution activity in Step 2A has been re-evaluated in Step 2B and determined to be well-understood, routine, conventional activity in the field. The specification does not provide any indication that the limitation of receiving user input or instructions is anything other than a conventional action that simply comes before selecting and assembling educational content (see page 21, lines 18-24). For these reasons, there is no inventive concept. The claim is not patent eligible.
Claims 2-5, 7-13, and 15-17 recite the additional elements of the content module (claims 2-5, 7-13, and 15-17) and a healthcare content module (claim 9). However, these additional elements amount to implementing an abstract idea on a generic computing device. As such, these additional elements, when considered individually or in combination with the other additional elements, do not integrate the abstract idea into a practical application or amount to significantly more than the abstract idea.
Thus, as the dependent claims remain directed to a judicial exception, and as the additional elements of the claims do not amount to significantly more, the dependent claims are not patent eligible.
Therefore, the claims fail to contain any additional element(s) or combination of additional elements that can be considered significantly more, and the claims are rejected under 35 U.S.C. 101 for lacking eligible subject matter.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claims 1-17 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Morgan et al. (U.S. Patent No. 11,961,197 B1), referred to hereinafter as Morgan.
Claim 1: Morgan teaches a method of delivering interactive content (Morgan, Col. 96, lines 54-60, “A XR mental health treatment feature of the exemplary Mental Health Module may be used to select, prioritize, and deliver appropriate assessment, diagnostic, therapeutic and/or health-related scenes. This feature may also address cognitive distortions with XR scenes and/or features within XR scenes (such as virtual objects, interactive virtual human avatars, virtual environment features, and the like).”), comprising:
determining a content module to be played on a user device (Morgan, Col 25, lines 24-57, “In one exemplary embodiment of the present Q&A feature, users can answer questionnaires that are assigned to him through a web portal. User specific surveys can be retrieved via API using a logged in user's current authentication token as structured JSON. Default survey JSON can also be loaded if the XR Health Platform is configured for offline use or if the device is offline. A survey's JSON may comprise one or more of an ID, a user ID, title, and a collection of questions. The questions may comprise: an ID, the question itself which is referred to as a title, and a format. Exemplary formats include: slider wherein the response can be a configurable range of numbers; dropdown wherein the response can be a single choice from a dropdown list of options; single response selection wherein the response can be a single choice from a list of options; multi response selection wherein the response can be multiple choice from a list of options; single response grid wherein the response can be one selection each for a subset of questions; and multi response grid wherein the response can be multiple selections each for a subset of questions. A slider's responses will have the range of numbers. Dropdown, single choice selection, and multi choice selection will have a collection of individual choices. Single choice grid and multi choice grid will have a collection of potential responses to each individual sub-question. Single choice and multi choice grid format questions will have a collection of sub-questions. Once a user's assigned questionnaire is retrieved from the API, the JSON is then parsed and displayed to the user in XR for answering. The parser checks the format of each question object, then procedurally generates the appropriate UI Canvas with the question at the top, the ability to respond in the center, and a button to move onto the next question at the bottom”);
downloading the content module to the user device (Morgan, Col. 11, lines 24-59, “In other embodiments, the present XR Health Platform may utilize other computer networks; for example, a wide area network (WAN), local area network (LAN), or intranet. The host server may comprise a processor and a computer readable medium, such as random access memory (RAM). The processor is operable to execute certain programs for performing the present XR Health Platform and other computer program instructions stored in memory. Such processor may comprise a microprocessor (or any other processor) and may also include, for example, a display device, internal and external data storage devices, cursor control devices, and/or any combination of these components, or any number of different components, peripherals, input and output devices, and other devices. Such processors may also communicate with other computer-readable media that store computer program instructions, such that when the stored instructions are executed by the processor, the processor performs the acts described further herein. Those skilled in the art will also recognize that the exemplary environments described herein are not intended to limit application of the present XR Health Platform, and that alternative environments may be used without departing from the scope of the invention. Various problem-solving programs incorporated into the present XR Health Platform and discussed further herein, may utilize as inputs, data from a data storage device or location. In one embodiment, the data storage device comprises an electronic database. In other embodiments, the data storage device may comprise an electronic file, disk, or other data storage medium. The data storage device may store features of the disclosure applicable for performing the present XR Health Platform. The data storage device may also include other items useful to carry out the functions of the present XR Health Platform. In one example, the exemplary computer programs may further comprise algorithms designed and configured to perform the present XR Health Platform.”);
executing the content module on the user device (Morgan, Col. 24, lines 15-30, “In one embodiment, once a “Start Trial” button is activated using patient input methods, one or more points of platform data are recorded for later analysis. These data may include points of the following: positional tracking data, biometric data, video data recorded from cameras, audio data recorded from cameras, audio data recorded from microphones, audio data recorded from other objects with audio recording functionalities within XR, points of data produced by actions and/or behaviors of clinicians and/or patients while in XR, responses to other questions, other items from the Q&A feature, other platform features, and other points of platform data. Other points of platform data may comprise items of content and/or features within XR. These items may be instantiated, modified, initialized, displayed, stopped and/or destroyed, and these functionalities may have in-scene buttons to control such functionalities.”); and
tracking a user’s progress on viewing the content module on the user device (Morgan, Col. 97, lines 51-65, “Other exemplary scenes and/or features may include XR scene(s) and/or features within XR scenes where past accomplishment(s), achieved goal(s), and/or positive progress are highlighted. The past accomplishment(s), achieved goal(s), and/or positive progress may be highlighted using sentences/statements, snippets, content objects, audio items, video items, 2D or 3D effects, animations, and/or replays of previously experienced scenes or sessions showing an avatar representing the patient to him/herself in 3rd person. Other exemplary scenes and/or features may include application of Socratic questioning/dialogue to identify negative thought patterns and/or cognitive distortions, and/or to attenuate negative or counterproductive thoughts or feelings, and/or to enhance positive or productive thoughts or feelings as described above.”).
Claim 2: Morgan teaches the invention of claim 1, as discussed above, and further teaches the method further comprising: compiling feedback collected based on user interaction with the content module executing the user device (Morgan, Col. 55, lines 18-34, “A real-time movement biofeedback feature of the exemplary Movement Module may be used for real-time biofeedback where the scoring in one or more gamified XR experiences is at least partially determined by establishing and/or maintaining physical movements and/or positions. The movements and/or positions are determined as described herein, and whereby visual and/or auditory stimuli provide real-time feedback in terms of the correctness or incorrectness of the movements and/or physical positions. This correctness or incorrectness results in a higher or lower score, respectively. In one embodiment of this feature, scores are proportional to the angle and height of the controllers relative to the HMD as well as the length of time that this position is maintained. For example, if the patient holds his arms straight out in front of him at eye level, the score goes up proportionally to the time that this position is maintained.”).
Claim 3: Morgan teaches the invention of claim 1, as discussed above, and further teaches the method further comprising: customizing the content module based on a medical history of an intended user of the content module (Morgan, Col. 86, lines 24-53, “An Alzheimer's and dementia feature of the exemplary Neurological Module may be used for detecting and/or screening for Alzheimer's disease and/or dementia using ML/AI models, voice and/or vocal biomarker analysis features, facial tracking, facial computer vision analyses, pupil and/or eye tracking, positional tracking, the Q&A feature, Movement Module features, Mental Health Module features, Clinical Platform Module features, Neurological Module features, other platform features, and/or using other platform data points. In one exemplary embodiment, this exemplary feature comprises neurocognitive assessments wherein features of the Neurological Module and/or as described in the Clinical Platform Module may be performed (either intentionally and/or passively), and patient inputs and/or actions are logged and stored in a database. In another exemplary embodiment, the present feature comprises movement assessments wherein features of the Neurological Module and/or Movement Module and/or as described in the Clinical Platform Module may be performed (either intentionally and/or passively). Patient inputs and/or actions are logged and stored in a database. Any the logged datasets from assessments are compared to repeat assessments and/or population normal values (customized for the age and gender of the patient) by clinicians and/or ML/AI models. According to one exemplary embodiment, this comparison is implemented to identify if patient has had significant changes and/or remarkable results indicative of Alzheimer's and/or dementia in cognitive domains, movement assessments, and/or other neurological characteristics and/or features as described herein.”).
Claim 4: Morgan teaches the invention of claim 1, as discussed above, and further teaches wherein determining the content module comprises: generating a visual representation of an intended user of the content module based on one or more images of the intended user (Morgan, Col. 57, lines 44-55, “The exemplary movement integration feature may also comprise variations combined with the telecommunication module for real-time voice, and/or video, and/or text interactions between the patient in XR and clinicians using a companion application, and/or web portal, and/or in XR. The exemplary movement integration feature may also comprise variations where avatar silhouettes are used for producing visual biofeedback for the patient, and where the color, size, texture, and/or shader on the avatar silhouette may be used to indicate the level of correctness or incorrectness of physical movements being performed by the patient.”); and
integrating the visual representation of the intended user into the content module (Morgan, Col. 3, lines 31-45, “Virtual human avatar” refers to a humanoid virtual avatar which may be animated, simulated, programmatically controlled (using ML/AI models, for example), and/or represented through other types of rendered content and/or other media, and is designed to interact with, educate, instruct, demonstrate, advise, assist, guide, escort, diagnose, screen, test, treat, and/or manage disease(s) and/or health-related issues for patients in XR. Virtual human avatars may interact with patients and/or clinicians through spoken dialogue, text, rendered content, through visual means, and/or through any other method of communication. Virtual human avatars may possess characteristics that are virtual approximations and/or facsimiles of characteristics of real-world clinicians and/or patients. When used in this context, the term “virtual human avatar(s)” is synonymous with “digital twin(s)”).
Claim 5: Morgan teaches the invention of claim 4, as discussed above, and further teaches wherein determining the content module comprises: modifying the visual representation of the intended user based on a progress of one or more health conditions of the intended user (Morgan, Col. 27, lines 2-25, “The exemplary Clinical Platform Module may further comprise a ML/AI to influence patient behavior feature which uses ML/AI models combined with points of platform data to influence a patient's and/or clinician's behavior and/or influence the patient to carry out desirable actions in XR by creating, deriving, configuring, triggering, modifying, deploying and/or controlling platform content and/or by utilizing other platform features. Points of platform data for a given patient are used as inputs for ML/AI models and/or one or more other platform features. Code and/or configuration instructions may be used to programmatically or otherwise modify, configure, instantiate, and/or control “non-player characters”, virtual human avatars, content and/or features, objects, and/or other features within scenes, sessions and/or regimens. The exemplary feature may use specific measurable and desirable platform actions and/or series of desirable and measurable platform actions over time (“platform behaviors”). For the purpose of this feature, “desirable actions” above also includes mitigating, decreasing, and/or eliminating undesirable actions (for example, decreasing the amount or frequency of cigarette smoking). Inputs or outputs for other ML/AI models, and/or inputs or outputs for one or more iterations of the same ML/AI model(s) may also be utilized within this feature.”).
Claim 6: Morgan teaches a method of delivering interactive content (Morgan, Col. 96, lines 54-60, “A XR mental health treatment feature of the exemplary Mental Health Module may be used to select, prioritize, and deliver appropriate assessment, diagnostic, therapeutic and/or health-related scenes. This feature may also address cognitive distortions with XR scenes and/or features within XR scenes (such as virtual objects, interactive virtual human avatars, virtual environment features, and the like).”), comprising:
storing content elements in an elements library (Morgan, Col. 26, lines 30-67, “A camera is used to take photographs, images, and/or videos of a patient and/or relating to the health of patients. Using the “measuring tape” (as described in the hardware section below) and/or using one or more other measurement scales and/or methods of evaluation, one or more of the following are obtained for an individual: height, waist circumference, hip circumference, bust circumference, thigh circumference, calf circumference, neck circumference, mid-brachial circumference, and knee-to-heel length. Using a scale and/or the Q&A feature, the weight of an individual is obtained. Mask and/or instance segmentation computer vision models and/or one or more other ML/AI models are applied to photographs, images, platform data, and/or videos of the patient and/or frames extracted from a video of a patient. The photo, image, platform data, and/or frame outputs of the model are modified versions of the input photo, image, platform data, and/or frame with pixels belonging to an area, characteristic, volume, and/or other measurement of the patient being delineated. The area, volume, and/or other measurement of the patient is estimated and/or calculated from a set of the model outputs either by themselves or when combined with other points of platform data and/or other platform features. The morphology, shape, body habitus, postural data, and/or points of anthropometric data are derived using other ML/AI models from estimates and/or calculations of area, volume, and/or other measurement related to the patient. Different ML/AI models may be applied to the photos, images, points of platform data, and/or frames to determine and/or validate points of derived data, morphological data, postural data, anthropometric data, and/or other points of platform data. Data points obtained through this feature may be recorded and stored in a database. Points of the derived data may be combined with other points of platform data, which may then be analyzed by ML/AI models and/or one or more other platform features to derive therapeutic, diagnostic, prognostic, and/or disease risk prediction data relating to diseases and/or disease-related outcomes.”);
receiving a request for a content module, the request comprising an identification of a subject of the content module (Morgan, Col. 69, lines 47-60, “In another exemplary embodiment, the present module uses an assessment of cranial nerve 3, 4, 6. This assessment may be completed using the ‘extraocular muscle test’ mentioned herein and/or visual field test mentioned herein, either with or without gaze and/or eye tracking; and/or through direct interactions with a clinician via the communications feature; and/or after auditory, visual, and/or text-based requests to perform actions relating to visual fields is delivered to the patient, with the subsequent patient actions being assessed using ML/AI models in combination with points of data provided by cameras, and/or, through the use of other points of platform data, and/or using eye and/or gaze tracking as described herein.”);
extracting, from the elements library, one or more content elements based on the subject of the content module (Morgan, Col. 38, lines 23-37, “In other exemplary embodiments, the feature further comprises a sub-feature for selecting from already created snippets and/or content objects. Clinicians and/or ML/AI models search for any available instructional, educational, diagnostic, feedback, and/or therapeutic snippets and/or content objects. Searching is accomplished either via a search function and/or by going through a list of all tag entries via a dropdown menu, scrollable element, search box, and/or other methods of querying, searching and/or selecting. Searching may also be carried out by ML/AI models. The search string entered may be used to query against the list of tags, labels, and/or annotations, and clinicians and/or ML/AI models may select items appearing in the search results to deploy to patients and/or clinicians in XR.”);
generating the content module from the one or more content elements (Morgan, Col. 39, lines 35-42, “The exemplary Configuration Module may further comprise a goal and feedback development feature. According to this feature, clinicians and/or ML/AI models may create, generate, modify, configure, and/or deploy concise, goal-focused, actionable, and/or personalized feedback as items of text, audio, images, video, and/or rendered content. The exemplary feature is enabled through one or more of the items and/or steps described below.”); and
storing the content module in a computer-readable storage medium (Morgan, Col. 11, lines 24-59, “In other embodiments, the present XR Health Platform may utilize other computer networks; for example, a wide area network (WAN), local area network (LAN), or intranet. The host server may comprise a processor and a computer readable medium, such as random access memory (RAM). The processor is operable to execute certain programs for performing the present XR Health Platform and other computer program instructions stored in memory. Such processor may comprise a microprocessor (or any other processor) and may also include, for example, a display device, internal and external data storage devices, cursor control devices, and/or any combination of these components, or any number of different components, peripherals, input and output devices, and other devices. Such processors may also communicate with other computer-readable media that store computer program instructions, such that when the stored instructions are executed by the processor, the processor performs the acts described further herein. Those skilled in the art will also recognize that the exemplary environments described herein are not intended to limit application of the present XR Health Platform, and that alternative environments may be used without departing from the scope of the invention. Various problem-solving programs incorporated into the present XR Health Platform and discussed further herein, may utilize as inputs, data from a data storage device or location. In one embodiment, the data storage device comprises an electronic database. In other embodiments, the data storage device may comprise an electronic file, disk, or other data storage medium. The data storage device may store features of the disclosure applicable for performing the present XR Health Platform. The data storage device may also include other items useful to carry out the functions of the present XR Health Platform. In one example, the exemplary computer programs may further comprise algorithms designed and configured to perform the present XR Health Platform.”).
Claim 7: Morgan teaches the invention of claim 6, as discussed above, and further teaches further comprising: providing the content module to an extended reality playback device or other user device operated by a user (Morgan, Col. 2, lines 6-14, “XR device” refers to any device that can be used for simulating, viewing, engaging, experiencing, controlling and/or interacting with XR. This includes headsets, head-mounted displays (HMD), augmented reality glasses, 2D displays viewing XR content, 2D displays, 3D displays, computers, controllers, projectors, other interaction devices, mobile phones, speakers, microphones, cameras, headphones, haptic devices, and the like.” and Morgan, Col. 31, lines 29-42, “A hardware agnostic feature of the exemplary XR Platform Module allows systems within the XR Health Platform to work in a hardware agnostic manner and/or to be distributed at scale (see FIG. 13) and consists of one or more of the following items described below. The XR Health Platform may be paid for, downloaded, and/or updated remotely using XR and/or other web-based interface. Tooltips may be provided for showing patients how to use patient input methods with the tooltips automatically adjusting to point to the correct locations on the virtual representations of one or more real-world input devices. Deep links and/or other methods may be utilized to recognize a user's hardware device(s) and/or facilitate the remote delivery of compatible platform/software package(s).”); and
receiving, from the extended reality playback device or other user device, feedback collected based on user interaction with the content module executing on the extended reality playback device or other user device (Morgan, Col. 55, lines 18-34, “A real-time movement biofeedback feature of the exemplary Movement Module may be used for real-time biofeedback where the scoring in one or more gamified XR experiences is at least partially determined by establishing and/or maintaining physical movements and/or positions. The movements and/or positions are determined as described herein, and whereby visual and/or auditory stimuli provide real-time feedback in terms of the correctness or incorrectness of the movements and/or physical positions. This correctness or incorrectness results in a higher or lower score, respectively. In one embodiment of this feature, scores are proportional to the angle and height of the controllers relative to the HMD as well as the length of time that this position is maintained. For example, if the patient holds his arms straight out in front of him at eye level, the score goes up proportionally to the time that this position is maintained.”).
Claim 8: Morgan teaches the invention of claim 7, as discussed above, and further teaches wherein the feedback comprises at least one of biometric feedback collected by the extended reality playback device or other user device, user input during the user interaction with the content module executing on the extended reality playback device or other user device (Morgan, Col. 55, lines 18-34, “A real-time movement biofeedback feature of the exemplary Movement Module may be used for real-time biofeedback where the scoring in one or more gamified XR experiences is at least partially determined by establishing and/or maintaining physical movements and/or positions. The movements and/or positions are determined as described herein, and whereby visual and/or auditory stimuli provide real-time feedback in terms of the correctness or incorrectness of the movements and/or physical positions. This correctness or incorrectness results in a higher or lower score, respectively. In one embodiment of this feature, scores are proportional to the angle and height of the controllers relative to the HMD as well as the length of time that this position is maintained. For example, if the patient holds his arms straight out in front of him at eye level, the score goes up proportionally to the time that this position is maintained.”), and third-party feedback based on the user interaction with the content module executing on the extended reality playback device or other user device (Morgan, Col. 53, lines 13-26, “Automated fitness assessment feature of the exemplary Movement Module comprises a system for an automated, semi-automated, clinician-supervised, and/or patient-self-directed physical fitness assessment. The exemplary assessment may comprise a pre-test safety assessment and/or other assessments completed using the Q&A feature discussed above. The exemplary assessment may further comprise a pre-configured and/or standardized set of physical tasks completed in scenes, sessions, and/or regimens using items within the movement module, ML/AI models, and/or other platform features. The exemplary assessment may further comprise a pre-configured feedback/results report that automatically populates with any relevant data obtained during the assessment.”).
Claim 9: Morgan teaches the invention of claim 8, as discussed above, and further teaches wherein the content module is a healthcare content module, the user is a patient, and the third-party observer is a healthcare professional (Morgan, Col. 3, lines 31-45, “Virtual human avatar” refers to a humanoid virtual avatar which may be animated, simulated, programmatically controlled (using ML/AI models, for example), and/or represented through other types of rendered content and/or other media, and is designed to interact with, educate, instruct, demonstrate, advise, assist, guide, escort, diagnose, screen, test, treat, and/or manage disease(s) and/or health-related issues for patients in XR. Virtual human avatars may interact with patients and/or clinicians through spoken dialogue, text, rendered content, through visual means, and/or through any other method of communication. Virtual human avatars may possess characteristics that are virtual approximations and/or facsimiles of characteristics of real-world clinicians and/or patients. When used in this context, the term “virtual human avatar(s)” is synonymous with “digital twin(s)” and Morgan, Col. 13, lines 7-22, “The exemplary XR Health Platform includes features which are organized into different “modules” described further below. These exemplary modules are for organizational purposes only, and any set of features and/or any set of items within features may be combined with any set of other features and/or items described herein, irrespective of module(s). Each of the exemplary modules may comprise features applicable for creating, configuring, and/or deploying tailored, personalized, adaptive and/or problem-focused scenes, sessions, and/or regimens to deliver, perform, and/or deploy diagnostic tests, screening tests, therapeutic features, and/or care delivery features. These features enable clinicians and/or ML/AI models to create, modify, configure, administer, and/or orchestrate diagnostic, therapeutic, and/or care delivery solutions in XR. FIG. 4 provides a high-level overview of one embodiment of the XR Platform.”).
Claim 10: Morgan teaches the invention of claim 7, as discussed above, and further teaches further comprising: modifying the content module based on the feedback collected based on user interaction with the content module executing on the extended reality playback device or other user device (Morgan, Col. 30, lines 40-53, “FIGS. 11 and 12 illustrate two examples of the flow of information related to the creation, modification, configuration, and/or implementation of scenes, sessions, and/or regimens. In these embodiments, the creation of personalized scenes, sessions, and/or regimens starts with platform features being utilized to curate, collect, modify, and/or create points of platform data to be used as initial information inputs. In these embodiments, items within the patient-level profile feature, initial visit feature, history of present illness feature, health problem list feature, barrier management feature, Q&A feature, and/or within the goal and feedback development feature are utilized in curating, collecting, modifying, and/or creating initial information inputs.”).
Claim 11: Morgan teaches the invention of claim 10, as discussed above, and further teaches wherein the modifying of the content module based on the feedback comprises modifying the content module in real-time while the content module executes on the extended reality playback device or other user device (Morgan, Col. 55, lines 18-34, “A real-time movement biofeedback feature of the exemplary Movement Module may be used for real-time biofeedback where the scoring in one or more gamified XR experiences is at least partially determined by establishing and/or maintaining physical movements and/or positions. The movements and/or positions are determined as described herein, and whereby visual and/or auditory stimuli provide real-time feedback in terms of the correctness or incorrectness of the movements and/or physical positions. This correctness or incorrectness results in a higher or lower score, respectively. In one embodiment of this feature, scores are proportional to the angle and height of the controllers relative to the HMD as well as the length of time that this position is maintained. For example, if the patient holds his arms straight out in front of him at eye level, the score goes up proportionally to the time that this position is maintained.”).
Claim 12: Morgan teaches the invention of claim 10, as discussed above, and further teaches wherein the modifying of the content module is performed by one or more machine learning algorithms trained to modify the content module based on the feedback (Morgan, Col. 55, lines 2-34, “A 3D positional tracking feature of the exemplary Movement Module may use positional and/or rotational information for locations on a patient's body. This information may be obtained using three dimensional positional and/or rotational tracking data from XR hardware. In other exemplary embodiments, the information may be obtained using ML/AI models, including computer vision models applied to an image, images extracted from a video, and/or a series of images captured from cameras. In other exemplary embodiments, the information may be obtained using the application of ML/AI models that are not computer vision models. In other exemplary embodiments, the information may be obtained using biometric data. In other exemplary embodiments, the information may be obtained using acoustic and/or sound data through the use of microphones. A real-time movement biofeedback feature of the exemplary Movement Module may be used for real-time biofeedback where the scoring in one or more gamified XR experiences is at least partially determined by establishing and/or maintaining physical movements and/or positions. The movements and/or positions are determined as described herein, and whereby visual and/or auditory stimuli provide real-time feedback in terms of the correctness or incorrectness of the movements and/or physical positions. This correctness or incorrectness results in a higher or lower score, respectively. In one embodiment of this feature, scores are proportional to the angle and height of the controllers relative to the HMD as well as the length of time that this position is maintained. For example, if the patient holds his arms straight out in front of him at eye level, the score goes up proportionally to the time that this position is maintained.”).
Claim 13: Morgan teaches the invention of claim 6, as discussed above, and further teaches wherein the content module is generated by one or more machine learning algorithms trained to generate the content module based on one or more inputs comprising the subject of the content module (Morgan, Col. 55, lines 2-34, “A 3D positional tracking feature of the exemplary Movement Module may use positional and/or rotational information for locations on a patient's body. This information may be obtained using three dimensional positional and/or rotational tracking data from XR hardware. In other exemplary embodiments, the information may be obtained using ML/AI models, including computer vision models applied to an image, images extracted from a video, and/or a series of images captured from cameras. In other exemplary embodiments, the information may be obtained using the application of ML/AI models that are not computer vision models. In other exemplary embodiments, the information may be obtained using biometric data. In other exemplary embodiments, the information may be obtained using acoustic and/or sound data through the use of microphones. A real-time movement biofeedback feature of the exemplary Movement Module may be used for real-time biofeedback where the scoring in o