DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Status of Application
This action is in reply to the application filed March 26, 2025.
Claims 5 and 12 are amended.
Claims 1-20 are pending.
Information Disclosure Statement
The information disclosure statement submitted March 26, 2025, and its contents have been considered.
Claim Rejections - 35 U.S.C. § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-20 are rejected under 35 U.S.C. § 112(b) or 35 U.S.C. § 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for pre-AIA applications, the applicant) regards as the invention.
Claims 1, 2, 9, 14, and 15: The term “likeliness in appearance to the user” recited in these claims is a relative term that renders the claims indefinite. The term is not defined by the claims, the specification does not provide a standard for ascertaining the requisite degree, and one of ordinary skill in the art would not be reasonably apprised of the scope of the invention.
Claims 2-8, 10-13, and 15-20 are rejected for incorporating the deficiencies of the rejected claims on which they respectively depend.
Claim Rejections - 35 U.S.C. § 103
The following is a quotation of 35 U.S.C. § 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. § 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-5, 7-18, and 20 are rejected under 35 U.S.C. § 103 as being unpatentable over Dieker et al. (U.S. Pub. No. 2021/0264802 A1) (hereinafter “Dieker”) in view of Magrini et al. (“Anorexia Nervosa, Body Image Perception and Virtual Reality Therapeutic Applications: State of the Art and Operational Proposal,” Int. J. Environ. Res. Public Health, 21 February 2022, 19, 2533) (hereinafter “Magrini”).
Claims 1, 9, and 14: Dieker, as shown, discloses the following limitations:
a processor (see at least ¶ [0011]: an apparatus for a therapeutic treatment of a human subject with environmental anxiety disorder. The apparatus generally consists of sensors, video displays, computer processor, and memory storage devices. A control module comprising a computer processor is communicatively coupled to a simulation data store, the simulation data store has machine-readable values for computer-generated features in a computer-simulated environment in which the human subject is immersed and tasked to perform an executive function; see also at least ¶ [0038]);
a memory operatively coupled to the processor, the memory having instructions stored thereon for treating eating disorders, wherein execution of the instructions by the processor (see at least ¶¶ [0011] and [0038]), cause the processor to:
continuously render a first humanoid computer object in an augmented- or virtual-reality environment selected from a plurality of humanoid bodies […] (see at least ¶ [0062]: a rendering module 1015 is communicatively coupled to the control module 1001. The rendering module is further communicatively coupled to a graphic processing unit (GPU) 1020 that generates the visual objects in the computer-simulated environment. A visual display device 1025 is communicatively coupled to the rendering module 1015 and the GPU 1020);
stop rendering of the first humanoid computer object based on a stop condition associated with an anxiety level value or score received from the user (see at least ¶ [0067]: an anxiety threshold datastore 1060 is communicatively coupled to the control module 1001. The anxiety threshold datastore 1060 stores an upper anxiety state value constant 1065 representing a diminished physiological capability of performing executive functions. A lower anxiety state value constant 1070 is associated with a sufficiently low physiological anxiety state whereby executive functions may be successfully performed with additional stress-induced anxiety. The upper anxiety state value constant 1065 and the lower anxiety state value constant 1070 are computed by one or more quantitative factors selected including, but not limited to, pulse rate, oxygen level, respiration rate, skin temperature and diaphoresis; see also at least ¶ [0068]); and
continuously render a second humanoid computer object in the augmented- or virtual- reality environment selected from the plurality of humanoid bodies, […] (see at least ¶ [0069]: responsive to a low result returned from the anxiety threshold function 1075, the control module 1001 instructs the rendering module 1015 to increase the values of the sensory variables 1035 to thereby increase the amount of visual and audible information generated by the rendering module 1015 and presented within the computer-simulated environment. Responsive to a high result returned from the anxiety threshold function 1075, the control module 1001 instructs the rendering module 1015 to decrease the values of the sensory variables 1035 to thereby decrease the amount of visual and audible information generated by the rendering module 1015 and presented within the computer-simulated environment. Finally, responsive to an inbounds result returned from the anxiety threshold function 1075, the control module 1001 instructs the rendering module 1015 to maintain substantially the same values of the sensory variables 1035 to thereby sustain the same amount of visual and audible information generated by the rendering module 1015 and presented within the computer-simulated environment).
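For clarity of the record, the three-branch control flow quoted above from Dieker’s ¶ [0069] may be paraphrased in the following short illustrative sketch; the function and variable names, and the scalar adjustment step, are shorthand for purposes of this analysis only and are not part of Dieker’s disclosure:

```python
def adjust_sensory_variables(anxiety, lower, upper, sensory_value, step=1.0):
    """Paraphrase of the three-branch logic described in Dieker ¶ [0069].

    An anxiety result below the lower constant increases sensory complexity;
    a result above the upper constant decreases it; an inbounds result holds
    the sensory variables substantially the same.
    """
    if anxiety < lower:   # low result: increase visual/audible information
        return sensory_value + step
    if anxiety > upper:   # high result: decrease visual/audible information
        return max(0.0, sensory_value - step)
    return sensory_value  # inbounds result: maintain the same values
```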
Particularly regarding claim 9, Dieker discloses the following features not necessarily addressed by the above:
establishing a baseline anxiety level value or score for a patient at a commencement of an eating disorder therapy (see at least ¶ [0020]: the method includes establishing a baseline anxiety level for the human subject wherein the baseline is assessed by automatically monitoring the phenotypic anxiety level of the human subject by one or more computer coupled sensors. The baseline may be obtained prior to the start of the computer simulation or within a computer simulation that is relatively “idle” without significant interaction or tasks assigned to the human subject. Once the baseline anxiety level is obtained, the human subject is tasked with an executive function wherein the human subject is fully or partially immersed in the computer-simulated environment for a time-limited session; see also at least ¶ [0067]: an anxiety threshold datastore 1060 is communicatively coupled to the control module 1001. The anxiety threshold datastore 1060 stores an upper anxiety state value constant 1065 representing a diminished physiological capability of performing executive functions. A lower anxiety state value constant 1070 is associated with a sufficiently low physiological anxiety state whereby executive functions may be successfully performed with additional stress-induced anxiety. The upper anxiety state value constant 1065 and the lower anxiety state value constant 1070 are computed by one or more quantitative factors selected including, but not limited to, pulse rate, oxygen level, respiration rate, skin temperature and diaphoresis);
receiving, via the processor, a selection of a treatment module of a plurality of treatment modules, wherein each treatment module treats a different fear associated with an eating disorder (see at least ¶ [0018]: Responsive to a low result returned from the anxiety threshold function, the control module instructs the rendering module to increase the values of the sensory variables to thereby increase the amount of visual and audible information generated (e.g., simulation complexity) by the rendering module and presented within the computer-simulated environment. Responsive to a high result returned from the anxiety threshold function, the control module instructs the rendering module to decrease the values of the sensory variables to thereby decrease the amount of visual and audible information generated by the rendering module and presented within the computer-simulated environment. Finally, responsive to an inbounds result returned from the anxiety threshold function, the control module instructs the rendering module to maintain substantially the same values of the sensory variables to thereby sustain the same amount of visual and audible information generated by the rendering module and presented within the computer-simulated environment; see also at least ¶ [0028]: the features of the computer-simulated environment may include audible noise, audio volume, quantity of visual objects in the environment, movement of visual objects in the environment, polygon count of rendered objects in the environment, lighting complexity of rendered objects in the environment, texture complexity of rendered objects in the environment, olfactory dispersions, tactile feedback frequency to the human subject, tactile feedback intensity to the human subject, frames per second rendered and simulation event repetition. 
Responsive to a decrease in the sensor-detected anxiety level, the computer automatically reintroduces sensory complexity to the computer-simulated environment wherein the human subject therapeutically develops proficiency in executive functions in increasingly complex environments).
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
a first humanoid computer object … defined at least by a gender type, body type, skin pigment, and associated weight, including a first selected body having an associated first weight value and a likeliness in appearance to the user (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4),
a second humanoid computer object … including a second selected body having an associated second weight value and a likeliness in appearance to the user, wherein the second weight value is greater than the first weight value (see at least p. 25 and the analysis above; see also at least p. 26, Fig. 4).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the therapeutic techniques taught by Magrini with the anxiety management systems disclosed by Dieker, because Magrini teaches at p. 27 that “The standard multidisciplinary approach in the treatment of AN (psychiatric, psychological, nutritional) could therefore be profitably integrated with these techniques using measures to evaluate their impact on the specific target (body image) and on the overall clinical evolution areas of the young patient. Based on what emerged from the analysis, the possibility of designing a VR system seems important, not only as a research tool but also and above all as a potential complement to treatments for the disorder.” See M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the therapeutic techniques taught by Magrini with the anxiety management systems disclosed by Dieker, because the claimed invention is merely a combination of old elements (the therapeutic techniques taught by Magrini and the anxiety management systems disclosed by Dieker), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
Claims 2 and 15: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
wherein the user is sequentially presented a series of humanoid computer objects in the augmented- or virtual-reality environment, the series of humanoid computer objects having (i) a predefined number of selected bodies having a low weight value and a likeliness in appearance to the user, (ii) a current predefined number of selected bodies having a current weight value and a likeliness in appearance to the user, and (iii) a predefined number of selected bodies having a high weight value and a likeliness in appearance to the user (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4:
[Figure 4 of Magrini reproduced in greyscale.]
See also p. 26).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claims 3 and 16: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
wherein the low weight value, the current weight value, and the high weight value are selectable based on a percentage of the current weight of the patient (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4.
The system selects the values of the avatars, which are based on a percentage of the current weight of the patient).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claims 4 and 17: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
wherein the low weight value, the current weight value, and the high weight value are selectable based on a pre-defined offset of the current weight of the patient (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4.
The system selects the values of the avatars, which are based on a pre-defined offset of the current weight of the patient).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claims 5 and 18: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above. Further, Dieker, as shown, discloses the following limitations:
wherein the first humanoid computer object is presented from a view of a camera defined in the augmented- or virtual-reality environment, wherein the camera is co-located to the first humanoid computer object (see at least ¶ [0080]: clinical observations suggest abnormal gaze perception to be an important indicator of anxiety disorders. In addition, vigilance in anxiety disorders may be conveyed by fixations on sources of stress. These behaviors and others related to them may be monitored by eye-tracking by camera sensors and weighted to anxiety levels. Speech patterns may be linked to both diagnosis and immediate anxiety levels based on activation, tonality, and monotony among other characteristics; see also at least ¶¶ [0066] and [0096]).
Claims 7 and 20: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above. Further, Dieker, as shown, discloses the following limitations:
wherein the first humanoid computer object has a blocking mosaic over a facial region of the first humanoid computer object (see at least ¶ [0088]: FIG. 17 returns to the 7-year old subject diagnosed with Autism Level 1 as previously shown in FIGS. 7-8. Patient view 420 is shown with a video feed 430 of the patient interacting with blocks. Facial tracking points 440 are overlaid on video feed 430, which is translated into an anxiety value 450 along with body temperature 460, pulse 470 and speech volume 480).
Claim 8: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above. Further, Dieker, as shown, discloses the following limitations:
wherein the system comprises an AR-VR goggle (see at least ¶ [0096]: Display: The experiences can be delivered on a wide variety of display types. These currently include laptop, large screen TV, full wall projection, and full surround as enabled by a CAVE (Cave Automatic Virtual Environment), a VR (Virtual Reality), an AR (Augmented Reality) or MR (Mixed Reality) headset; see also at least ¶ [0062]: it might be the augmented reality HMD, such as those sold under the brand HOLOLENS by Microsoft Corporation. Alternatively, the display device 1025 may be single panel display monitors, groups of displays forming multi-panel display monitors, rear projection displays, front projection displays and virtual reality HMDs. These technologies provide different levels of realism and immersion into the computer-simulated environment).
Claim 10: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
wherein the fear associated with the selected treatment module is a fear of weight gain (see p. 21: Porras-Garcia et al. [52] carried out several studies using a VR-based embodiment method. Results showed a reduction in the body-related anxiety, fear of gaining weight, body-related attentional bias and BI disturbances [46]; see also pp. 7, 8, 13, and 20), and the at least one change to the second humanoid computer object comprises an increased weight of the second humanoid computer object in contrast with the first humanoid computer object (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claim 11: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
wherein the fear associated with the selected treatment module is a fear of bodily sensations (see at least p. 20: to modify body-size perception through an illusion of ownership over a virtual body, Buche et al. [58] proposed to couple a tactile stimulation when viewing an avatar from a third-person perspective (a condition known to produce this kind of illusion). This application offers the possibility to choose between avatars of different builds and to perform morphing to reduce the avatar’s body. Moreover, the application allows to implicitly measure how people perceive their body size from an affordance estimation task in which people have to appreciate if they can pass through doors of different sizes without twisting their shoulders; see also at least pp. 21-22), and the at least one change to the second humanoid computer object comprises one or more of increased bloating or of increased jiggling in contrast with the first humanoid computer object (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claim 12: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
wherein the fear associated with the selected treatment module is a fear of weight gain (see p. 21: Porras-Garcia et al. [52] carried out several studies using a VR-based embodiment method. Results showed a reduction in the body-related anxiety, fear of gaining weight, body-related attentional bias and BI disturbances [46]; see also pp. 7, 8, 13, and 20), and the at least one change to the second humanoid computer object comprises a weight gain to a selected body part or area of the second humanoid computer object in contrast with the first humanoid computer object (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claim 13: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker does not explicitly disclose, but Magrini, as shown, teaches the following limitations:
detecting, via the processor, that the user has consumed a product; and in response to the detecting, presenting, via the processor, a third humanoid computer object in the augmented- or virtual-reality environment selected, wherein the third humanoid computer object includes at least one change based on the fear associated with the selected treatment module (see p. 25: the patient, through a training process, at least initially guided by an operator, can thus adopt control strategies to learn to control the monitored function voluntarily. By controlling the monitored function, the subject will indirectly control the pathological situation related to it. Therefore, the proposed system consists of a VR environment in which the subject is placed in front of an avatar who reproduces his features, like a mirror. The identification of the subject in the avatar is reinforced by the use of wireless controllers or (alternatively) by the direct recognition of the hands through the cameras integrated into the headset: by moving the hands the subject will control those of the avatar that represents him. The overall appearance of the avatar will be made dynamic: while maintaining a similarity with the subject, the avatar will be able to change its status from underweight to overweight continuously; see also at least p. 26, Fig. 4.
The user consumes the VR experience, as well as the stimuli provided thereby).
The rationales to modify/combine the teachings of Dieker to include the teachings of Magrini are presented above regarding claims 1 and 14 and incorporated herein.
Claims 6 and 19 are rejected under 35 U.S.C. § 103 as being unpatentable over Dieker et al. (U.S. Pub. No. 2021/0264802 A1) (hereinafter “Dieker”) in view of Magrini et al. (“Anorexia Nervosa, Body Image Perception and Virtual Reality Therapeutic Applications: State of the Art and Operational Proposal,” Int. J. Environ. Res. Public Health, 21 February 2022, 19, 2533) (hereinafter “Magrini”), and further in view of Connor et al. (U.S. Pub. No. 2022/0415476 A1) (hereinafter “Connor”).
Claims 6 and 19: The combination of Dieker and Magrini teaches the limitations as shown in the rejection above.
Dieker and Magrini do not explicitly disclose, but Connor, as shown, teaches the following limitations:
wherein the camera has a pre-defined height level that limits presentation of the first humanoid computer object to exclude a facial region of the first humanoid computer object (see at least ¶ [0169]: a system for nutritional monitoring and management can include a wearable device with a camera, wherein the device is worn like a watch, worn like a necklace, worn on clothing (like a button), worn like a finger ring, or worn like an ear ring. In an example, the focal direction and/or distance of a camera can be adjusted in real time to record images of food, but minimizing privacy-intruding images of people or other objects. In an example, a camera can be kept oriented toward a person's hand so that nearby people are generally not in focus in images. In an example, face recognition and/or pattern recognition can be used to automatically blur privacy-intruding portions of an image such as other people's faces. In an example, the focal range of a camera can be adjusted in real time to automatically blur privacy-intruding portions of an image such as other people’s faces).
It would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the privacy protection techniques taught by Connor with the anxiety management systems disclosed by Dieker (as modified by Magrini), because Connor teaches at ¶ [0169] that its techniques serve to “minimiz[e] privacy-intruding images of people.” See M.P.E.P. § 2143(I)(G).
Moreover, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to combine the privacy protection techniques taught by Connor with the anxiety management systems disclosed by Dieker (as modified by Magrini), because the claimed invention is merely a combination of old elements (the privacy protection techniques taught by Connor, the therapeutic techniques taught by Magrini, and the anxiety management systems disclosed by Dieker), in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. See M.P.E.P. § 2143(I)(A).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. The following references have been cited to further show the state of the art with respect to virtual- and augmented-reality therapy systems.
Samec et al. (U.S. Pub. No. 2020/0312038 A1) (Augmented reality display system for evaluation and modification of neurological conditions, including visual processing and perception conditions.)
Mölbert et al. (“Assessing body image in anorexia nervosa using biometric self-avatars in virtual reality: Attitudinal components rather than visual body size estimation are distorted.” Psychological Medicine. 2018;48(4):642-653).
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Christopher Tokarczyk, whose telephone number is 571-272-9594. The examiner can normally be reached Monday-Thursday between 6:00 AM and 4:00 PM Eastern.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Mamon Obeid, can be reached at 571-270-1813. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER B TOKARCZYK/ Primary Examiner, Art Unit 3687