Prosecution Insights
Last updated: April 19, 2026
Application No. 18/731,462

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND STORAGE MEDIUM

Non-Final OA: §101, §102, §103
Filed: Jun 03, 2024
Examiner: VELAZQUEZ VALENCI, AMELIA NMN
Art Unit: 2612
Tech Center: 2600 (Communications)
Assignee: Canon Medical Systems Corporation
OA Round: 1 (Non-Final)
Grant Probability: Favorable
OA Rounds: 1-2
To Grant: 2y 10m

Examiner Intelligence

Career Allow Rate: 0% (0 granted / 0 resolved; -62.0% vs TC avg)
Interview Lift: +0.0% (minimal lift among resolved cases with an interview)
Avg Prosecution: 2y 10m (typical timeline)
Total Applications: 7 (all currently pending, across all art units)

Statute-Specific Performance

§101: 8.3% (-31.7% vs TC avg)
§103: 70.8% (+30.8% vs TC avg)
§102: 16.7% (-23.3% vs TC avg)
§112: 4.2% (-35.8% vs TC avg)
Deltas are vs a Tech Center average estimate; based on career data from 0 resolved cases.
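The four statute figures above are each rejection type's share of the examiner's office actions, and each displayed delta equals that share minus a flat 40.0% baseline, so the Tech Center average estimate appears to be constant across statutes. A minimal sketch of that arithmetic, assuming hypothetical per-statute rejection counts (2, 17, 4, 1 out of 24) chosen only to reproduce the displayed shares:

```python
# Hypothetical rejection counts per statute (not from the dashboard's
# underlying data); chosen so the computed shares match the page.
counts = {"101": 2, "103": 17, "102": 4, "112": 1}

# Flat Tech Center baseline implied by the displayed deltas
# (e.g., 8.3 - (-31.7) = 40.0 for every statute shown).
TC_BASELINE = 40.0

total = sum(counts.values())
for statute, n in counts.items():
    share = 100 * n / total
    delta = share - TC_BASELINE
    print(f"§{statute}: {share:.1f}% ({delta:+.1f}% vs TC avg)")
```

Running this prints shares of 8.3%, 70.8%, 16.7%, and 4.2% with the same deltas shown above, which is consistent with the dashboard deriving its percentages this way.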

Office Action

Grounds: §101, §102, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Priority

Receipt is acknowledged that the application claims priority to foreign application number JP2023-093357 dated June 06, 2023. Copies of the certified papers required by 37 CFR 1.55 have been received. Priority is acknowledged under 35 USC 119(e) and 37 CFR 1.78.

Information Disclosure Statement

The IDS dated 06/03/2024 has been considered and placed in the application file.

Specification

The disclosure is objected to because of the following informality: on page 22, line 18, "S114" should be labeled consistently with Fig. 11 as "S214". Appropriate correction is required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-14 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea, without significantly more. The claims recite a generic abstract idea of representing an avatar and data (a "motion history") but do not recite significantly more than the abstract idea of representing an avatar and motion history, which could be performed by a mental process.
More specifically, claims 1, 13, and 14 recite the limitation of "acquiring a motion history of a patient who represents themselves as an avatar and gets a checkup at a clinic in a metaverse". The acquiring limitation is a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind but for the recitation of an avatar in a metaverse. That is, other than reciting "in a metaverse", nothing in the claim precludes the acquiring step from practically being performed in the human mind. For example, the claim encompasses the user mentally representing the avatar in the user's mind and imagining a motion history to be added to the avatar. This limitation is a mental process.

In addition, the determining limitation ("determining a specific motion for estimating a state of the patient by analyzing the motion history") is also a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, the claim encompasses the user imagining a specific motion according to the motion history. Thus, this limitation is also a mental process.

Lastly, the outputting limitation ("outputting information based on the specific motion via an output interface") is also a process that, under its broadest reasonable interpretation, covers performance of the limitation in the mind. For example, the claim encompasses the user mentally combining the motion history and the specific motion with the avatar to create a modified image in the user's mind.

The judicial exception is not integrated into a practical application because the claim fails to explicitly recite any additional elements other than the abstract idea itself, except for reciting an avatar represented in a metaverse. This generic representation limitation is no more than merely applying the exception to a generic metaverse. Accordingly, this generic representation limitation does not integrate the abstract idea into a practical application because it does not impose any meaningful limits on practicing the abstract idea.

Claims 1, 13, and 14 do not include additional elements that are sufficient to amount to significantly more than the judicial exception because the generic representation limitation is no more than merely applying the exception to a generic metaverse, which involves well-understood, routine, conventional computer functions/components as recognized by the court decisions listed in MPEP § 2106.05(d) (e.g., use of a computer for electronic recordkeeping, Alice Corp., 134 S. Ct. at 2359, 110 USPQ2d at 1984 (creating and maintaining "shadow accounts"); Ultramercial, 772 F.3d at 716, 112 USPQ2d at 1755 (updating an activity log)). Furthermore, "considered as an ordered combination, the computer components of [applicant's] method add nothing that is not already present when the steps are considered separately." Alice v. CLS Bank, 134 S. Ct. 2347, 110 USPQ2d 1976, 1985 (2014). Therefore, the Examiner concludes that the claims do not recite significantly more than the abstract idea, and consequently are ineligible.

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of the appropriate paragraphs of 35 U.S.C.
102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-4, 7-8, and 11-14 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by US Patent Application Publication US 2023/0360772 A1 (Manteau-Rao et al.) (hereinafter Manteau-Rao).

Regarding claim 1, Manteau-Rao teaches an information processing device comprising processing circuitry, the processing circuitry performing: (Manteau-Rao “[0117] Some embodiments may utilize a VRCT engine to perform one or more parts of process 1850, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device. For instance, a VRCT engine may be incorporated in, e.g., as one or more components of, head-mounted display 201 and/or other systems of FIGS. 19-22.”; and “[0163] The arrangement shown in FIG.
21 includes one or more…processors 960…These components may be housed on a local computing system or may be remote components in wired or wireless connection with a local computing system (e.g., a remote server, a cloud, a mobile device, a connected device, etc.).”) acquiring a motion history of a patient who represents themself as an avatar and gets a check-up at a clinic in a metaverse; (Manteau-Rao Abstract, “[0029] a VRCT platform may comprise one or more automatic speech recognition system and natural language processing applications as well as biometric sensing, recording, and tracking systems for building biometric models for comparisons, diagnostics, recommendations for, e.g., treatment and/or intervention, etc.”; “[0032] …may include a digital hardware and software medical device that uses VR for health care, focusing on mental, physical, and neurological rehabilitation, including various biometric sensors, such as sensors to measure and record heart rate, respiration, temperature, perspiration, voice/speech (e.g., tone, intensity, pitch, etc.), eye movements, facial movements, jaw movements, hand and feet movements, neural and brain activities, etc. For instance, voice biomarkers and analyzers may be used to assess and track emotional states and/or determine intensity values for emotions.”; “[0030] In the context of the VRCT system, the word “patient” may generally be considered equivalent to a subject, user, participant, student, etc…”; “[0083] In scenario 1100, a patient avatar may enter a virtual room…the patient may acclimate herself to the virtual world. For instance, a patient may view the hands of their avatar in front of their face or resting on their lap. To facilitate comfortability in the virtual environment, a patient may be asked to raise their hands in front of headset and move them.”; “[0033] The VR device may be used in a clinical environment…the VR device may be configured for remote sessions and remote monitoring. 
A therapist or supervisor…may monitor the experience in the same room or remotely…a therapist may be physically remote or in the same room as the patient.”; and “[0171] Processor(s) 960 may also execute instructions for constructing an instance of virtual space. The instance may be hosted on an external server and may persist and undergo changes even when a participant is not logged in to said instance. In some embodiments, the instance may be participant-specific, and the data required to construct it may be stored locally. In such an embodiment, new instance data may be distributed as updates that users download from an external source into local memory. In some exemplary embodiments, the instance of virtual space may include a virtual volume of space, a virtual topography (e.g., ground, mountains, lakes), virtual objects, and virtual characters (e.g., non-player characters “NPCs”). The instance may be constructed and/or rendered in 2D or 3D. The rendering may offer the viewer a first-person or third-person perspective. A first-person perspective may include displaying the virtual world from the eyes of the avatar and allowing the patient to view body movements from the avatar's perspective. A third-person perspective may include displaying the virtual world from, for example, behind the avatar to allow someone to view body movements from a different perspective. The instance may include properties of physics, such as gravity, magnetism, mass, force, velocity, and acceleration, which cause the virtual objects in the virtual space to behave in a manner at least visually similar to the behaviors of real objects in real space.”) determining a specific motion for estimating a state of the patient by analyzing the motion history; and (Manteau-Rao Figs. 18A-18C, “[0032] …voice biomarkers and analyzers may be used to assess and track emotional states and/or determine intensity values for emotions.”; “[0071]…a chart like FIG. 
18B may track biometric data depicting a patient calming down or improving emotional state…a decline in a biometric indicating improvement in a patient's emotional state, emotional conditions, and associated bio-physical states, conditions, and/or markers—e.g., lower perspiration, lower heart rate, lower blood pressure, improved respiration, fewer involuntary movements, etc...”; “[0116] …neural networks may be trained based on survey data and biometric data and used to determine if new biometric data may indicate a patient might relapse, staying steady, or improving.”; “[0128] …data may be collected to train a neural network to, e.g., categorize emotional states and/or quantify intensity values based on biometric readings…by a single patient's data and/or a collection of patient data to recognize changes in emotional state.”; and “[0138] …a decrease of values during the time between the first biometric measurement to the second biometric measurement using one or more sensors, such as a temperature measurement, a facial tracker, and a camera and/or light sensor, may identify that a patient is likely less angry…Body sensors may collect movement data as first and second biometric values to determine, e.g., if a patient is shaking more or less.”). The Examiner notes “specific motion” in the claim limitation can be any motion since Applicant fails to define a special kind or type of motion in their disclosure. Applicant states on page 11, lines 17-19, “For example, the motion analyzing function 122 extracts a self-disclosure motion as the specific motion out of a series of motions of the patient P included in the motion history.” outputting information based on the specific motion via an output interface. (Manteau-Rao Fig. 
22,“[0153] HMD 201…VR headsets typically include a processor...”; “[0172] …processor(s) 960 executes a program to transmit data for the 3D model to another component of the computing environment (or to a peripheral component in communication with the computing environment) that is capable of displaying the model, such as display 950.”; and “[0173] The 3D model may be representative of a virtual entity that can be displayed in an instance of virtual space, e.g., an avatar. The virtual entity is capable of interacting with the virtual topography, virtual objects, and virtual characters within virtual space. The virtual entity is controlled by a user's movements, as interpreted by sensors 992 communicating with the system. Display 950 may display a Patient View. The patient's real-world movements are reflected by the avatar in the virtual world.”) Claim 13 is directed to an information processing method that is performed by a computer, (Manteau-Rao “[0117] Some embodiments may utilize a VRCT engine to perform one or more parts of process 1850, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device. For instance, a VRCT engine may be incorporated in, e.g., as one or more components of, head-mounted display 201 and/or other systems of FIGS. 19-22.”; and “[0163] The arrangement shown in FIG. 21 includes one or more…processors 960…These components may be housed on a local computing system or may be remote components in wired or wireless connection with a local computing system (e.g., a remote server, a cloud, a mobile device, a connected device, etc.).”) and its steps are similar to the scope and functions performed by the device claim 1 and therefore claim 13 is also rejected with the same rationale as specified in the rejection of claim 1. 
Claim 14 is directed to a non-transitory computer-readable storage medium storing a program, the program causing a computer to perform: (Manteau-Rao “[0117] Some embodiments may utilize a VRCT engine to perform one or more parts of process 1850, e.g., as part of a VR application, stored and executed by one or more of the processors and memory of a headset, server, tablet and/or other device. For instance, a VRCT engine may be incorporated in, e.g., as one or more components of, head-mounted display 201 and/or other systems of FIGS. 19-22.”; and “[0163] The arrangement shown in FIG. 21 includes one or more…processors 960…These components may be housed on a local computing system or may be remote components in wired or wireless connection with a local computing system (e.g., a remote server, a cloud, a mobile device, a connected device, etc.).”) and its scope and functions are similar to the scope and functions performed by the device claim 1 and therefore claim 14 is also rejected with the same rationale as specified in the rejection of claim 1. Regarding claim 2, Manteau-Rao teaches wherein the motion history includes one or both of a motion history of the patient in the metaverse and a motion history of the patient in the real world (Manteau-Rao “[0032] …biometric sensors, such as sensors to measure and record heart rate, respiration, temperature, perspiration, voice/speech (e.g., tone, intensity, pitch, etc.), eye movements, facial movements, jaw movements, hand and feet movements, neural and brain activities, etc.”; “[0070] FIG. 
7 depicts a VR system with exemplary components of a VRCT platform including several biometric sensors…The biometric sensors measure and record a variety of biometric data including heart rate, respiration, temperature, perspiration, voice/speech (e.g., tone, intensity, pitch, etc.), eye movements, facial movements, mouth and jaw movements, hand and feet movements, neural and brain activities, etc., throughout the Cognitive Therapy session.”; “[0155] …in FIG. 19B, sensors 202 are placed on the body in particular places to measure body movement and relay the measurements for translation and animation of a VR avatar.”; “[0158] A VR environment rendering engine on HMD 201 (sometimes referred to herein as a “VR application”), such as the Unreal Engine™, uses the position and orientation data to create an avatar that mimics the patient's movement.”; and “[0159] A patient or player may “become” their avatar when they log in to a virtual reality activity. When the player moves their body, they see their avatar move accordingly. Sensors in the headset may allow the patient to move the avatar's head, e.g., even before body sensors are placed on the patient. 
A system that achieves consistent high-quality tracking facilitates the patient's movements to be accurately mapped onto an avatar.”) Regarding claim 3, Manteau-Rao teaches wherein the motion history includes at least one of a motion history before the patient gets a check-up at the clinic, a motion history after the patient has gotten a check-up at the clinic, and a motion history when the patient is getting a check-up at the clinic (Manteau-Rao “[0070] Concurrently, as the user or patient starts or enters in the VR environment, biometric sensors start to measure and record biometric data of the patient for building biometric models for comparisons, diagnostics, and recommendations.”; “[0071] …biometric data may be used to correlate with the state of emotional wellness of the patient at the start of the Cognitive Therapy session, throughout the exercises, and at the end of the session…a chart like FIG. 18B may track biometric data depicting a patient calming down or improving emotional state, e.g., experiencing less intensity for one or more emotions and/or thoughts over the session.”; “[0182] Motion data may include specific range-of-motion (ROM) data that may be saved about the patient's movement over the course of each activity and session, so that therapists can compare session data to previous sessions' data.”; “[0115] Patient biometric data may be taken before, during, or at the end of a VRCT exercise and used as a comparison.”; and “[0132] The biometric data can be used to correlate with the state of emotional wellness of the patient at the start of the Cognitive Therapy session, throughout the exercises, and at the end of the session.”) Regarding claim 4, Manteau-Rao teaches wherein the processing circuitry performs: determining a degree of return indicating a degree to which the patient is able to return to the real world on the basis of the specific motion; and outputting information based on the degree of return via the output interface (Manteau-Rao 
Figs. 1-6 & 8-18, “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”; “[0075] FIG. 
9 illustrates the “Check It” exercises, e.g., in process 900.”; “[0076] In Step 902…the VR therapist provides two columns underneath the thought in the ledger, e.g., (1) evidence for the thoughts in a first column and (2) evidence against the thoughts in a second column…In Step 903, the VR therapist invites the patient to start listing out loud evidence supporting the thought and then list evidence against the thought…In Step 904, the VR therapist provides the evidence and counterevidence as lists that appear on the ledger as the patient speaks…In Step 905, the ledger page gets turned on the ledger so that only evidence against the old thought is shown.”; “[0077] In Step 910, the VR therapist encourages the patient to share a warm, compassionate response in the form of a new thought the patient may think of based on the evidence against from the ledger…In Step 911, the virtual friend expresses gratitude for friendly help from the patient. The virtual friend's facial expression may change to reflect emotional relief.”; “[0078] To complete the Cognitive Therapy according to this disclosure, FIG. 10 illustrates the “Change It” exercises.”; and “[0072] to facilitate engagement and to simulate greeting gestures, the patient is instructed by the VR Cognitive Therapy program to raise their hands in front of the VR head mounted display (HMD)…”).

The Examiner notes it is implicit that a user or “patient” may “return to the real world” after the therapy session is complete or once self-disclosure and motion make it apparent. Manteau-Rao uses a scale and intensity levels (or ratings) as measurements for a “degree of return” to indicate a patient’s emotional state. Ultimately this is up to the patient; however, the process itself initiates a “degree of return”. The process utilizes a conversational approach to determine a “degree of return” through a completion of questions and exercises, allowing the patient to express their ability to “return to the real world”.
Regarding claim 7, Manteau-Rao teaches wherein the processing circuitry performs: additionally outputting a vocal sound of the first avatar via the output interface; and (Manteau-Rao “[0087] …the VRCT platform will receive voice input 1122, e.g., using a microphone in connection with the HMD (e.g., via sound card or USB interface). For instance, patient voice input may be captured as an audio signal using the microphone built into the HMD.”) changing the vocal sound of the first avatar in the metaverse according to the degree of return (Manteau-Rao “[0076] The patient may specify the gender, age, height, weight, body style, ethnicity, voice, hair style, clothing style, etc…”; “[0103] …using a synthetized voice…e.g., voice cloning and/or voice conversion to allow a virtual avatar to speak with the voice of a patient's real friend with services such as Descript's Overdub and Respeecher. To script the virtual friend's speech, the VRCT platform may use speech synthesis directly with a model of a selected real-world friend's voice to create spoken audio, e.g., for scenario 1400. In some embodiments, the VRCT platform may generate speech in any voice and then use voice conversion to modify the speech to the selected voice...”; and “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. 
The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”) Regarding claim 8, Manteau-Rao teaches wherein the processing circuitry changes the vocal sound of the first avatar such that the vocal sound is less like a real vocal sound of the patient as the degree of return becomes lower and changes the vocal sound of the first avatar such that the vocal sound is more like the real vocal sound of the patient as the degree of return becomes higher (Manteau-Rao “[0076] The patient may specify the gender, age, height, weight, body style, ethnicity, voice, hair style, clothing style, etc…”; “[0103] …using a synthetized voice…e.g., voice cloning and/or voice conversion to allow a virtual avatar to speak with the voice of a patient's real friend with services such as Descript's Overdub and Respeecher. To script the virtual friend's speech, the VRCT platform may use speech synthesis directly with a model of a selected real-world friend's voice to create spoken audio, e.g., for scenario 1400. In some embodiments, the VRCT platform may generate speech in any voice and then use voice conversion to modify the speech to the selected voice...”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. 
The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”) Regarding claim 11, Manteau-Rao teaches wherein the processing circuitry performs: outputting a first avatar which is an avatar of the patient in the metaverse via the output interface; and (Manteau-Rao “[0083] In scenario 1100, a patient avatar may enter a virtual room or setting such as a virtual therapy room. Once the patient avatar is in the virtual room, the patient may acclimate herself to the virtual world. For instance, a patient may view the hands of their avatar in front of their face or resting on their lap. To facilitate comfortability in the virtual environment, a patient may be asked to raise their hands in front of headset and move them.”; and “[0153] HMD 201 is a piece central to immersing a patient in a virtual world in terms of presentation and movement. A headset may allow, for instance, a wide field of view (e.g., 110°) and tracking along six degrees of freedom.”) making an external environment viewed from the first avatar in the metaverse abstract according to the degree of return (Manteau-Rao Fig. 
11, “[0069] …the user or patient can select or construct a customized virtual reality environment or space for the session…the user can choose an office setting or an outdoor setting…The user can choose size of the office, the color scheme, the lighting, the furniture, etc. that would be most comfortable for him or her. Alternatively, the user can choose an outdoor setting such as a park or beach as the place for the Cognitive Therapy session. In addition, the user can choose various background features, such as nature scenes, background sounds, lighting, etc.”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. 
Only that identified thought remains in a cloud.”) Regarding claim 12, Manteau-Rao teaches wherein the processing circuitry changes the external environment such that the external environment is less like the real world as the degree of return becomes lower and changes the external environment such that the external environment is more like the real world as the degree of return becomes higher (Manteau-Rao “[0069] …the user or patient can select or construct a customized virtual reality environment or space for the session…the user can choose an office setting or an outdoor setting…The user can choose size of the office, the color scheme, the lighting, the furniture, etc. that would be most comfortable for him or her. Alternatively, the user can choose an outdoor setting such as a park or beach as the place for the Cognitive Therapy session. In addition, the user can choose various background features, such as nature scenes, background sounds, lighting, etc.”; “[0151] The large sensor 202B (e.g., a wireless transmitter module) and small sensors 202 are equipped with mechanical and electrical components that measure position and orientation in physical space and then translate that information to construct a virtual environment.”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. 
The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”)

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or non-obviousness.

Claims 5-6 and 9-10 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Application Publication US 2023/0360772 A1 (Manteau-Rao et al.) (hereinafter Manteau-Rao) in view of Japanese Patent JP7161802B1 (Uchida Shigeki) (hereinafter Shigeki).
Regarding claim 5, Manteau-Rao teaches wherein the processing circuitry performs: outputting a first avatar which is an avatar of the patient in the metaverse via the output interface; and (Manteau-Rao “[0083] In scenario 1100, a patient avatar may enter a virtual room or setting such as a virtual therapy room. Once the patient avatar is in the virtual room, the patient may acclimate herself to the virtual world. For instance, a patient may view the hands of their avatar in front of their face or resting on their lap. To facilitate comfortability in the virtual environment, a patient may be asked to raise their hands in front of headset and move them.”; and “[0153] HMD 201 is a piece central to immersing a patient in a virtual world in terms of presentation and movement. A headset may allow, for instance, a wide field of view (e.g., 110°) and tracking along six degrees of freedom.”) making an appearance of the first avatar in the metaverse according to the degree of return (Manteau-Rao “[0082] …a patient may choose characteristics of their avatar such as height, weight, skin color, gender, clothing, etc…Avatar customization may be a straightforward user interface or series of menus…The avatar customizations may be stored in a patient or therapist profile, e.g., in local memory and/or at in a cloud server…avatars may be rendered based on the parameters using VR application based on, e.g., software-development environment.”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. 
The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”)

However, Manteau-Rao is silent about abstraction. Shigeki teaches abstraction (Shigeki “[0008] …a feature part extraction means for extracting a feature part in the user identification avatar, and an abstraction processing means for performing abstraction processing to the feature part of the user identification avatar according to the disclosure level.”; “[0029] Specifically, the public avatar adjustment unit 3 includes a characteristic portion extraction unit 10 that extracts a characteristic portion from the basic avatar, and an abstraction processing unit 11 that performs abstraction processing on the characteristic portion extracted by the characteristic portion extraction unit 10.”; and “[0040] …the basic avatar is not made completely non-disclosed, but the basic avatar in which only the feature portion is abstracted is displayed, so that there is an advantage that privacy protection or the like can be realized without impairing the atmosphere of a service such as a game Or an SNS.”)

Manteau-Rao and Shigeki are analogous art as both are related to avatars or virtual entities in the metaverse.
Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Manteau-Rao by abstraction as taught by Shigeki and to use that within Manteau-Rao’s avatar in the virtual reality cognitive therapy (VRCT) device. The motivation for the above is to customize an avatar’s appearance so that a user can choose whether or not to be anonymous, enhancing the overall virtual experience.

Regarding claim 6, Manteau-Rao teaches wherein the processing circuitry changes the appearance of the first avatar of the patient as the degree of return becomes lower and changes the appearance of the first avatar of the patient as the degree of return becomes higher (Manteau-Rao “[0082] …a patient may choose characteristics of their avatar such as height, weight, skin color, gender, clothing, etc…Avatar customization may be a straightforward user interface or series of menus…The avatar customizations may be stored in a patient or therapist profile, e.g., in local memory and/or at in a cloud server…avatars may be rendered based on the parameters using VR application based on, e.g., software-development environment.”; “[0083] In scenario 1100, a patient avatar may enter a virtual room or setting such as a virtual therapy room. Once the patient avatar is in the virtual room, the patient may acclimate herself to the virtual world. For instance, a patient may view the hands of their avatar in front of their face or resting on their lap. To facilitate comfortability in the virtual environment, a patient may be asked to raise their hands in front of headset and move them.”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG.
8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”) However, Manteau-Rao is silent about such that the appearance is less like a real appearance and such that the appearance is more like the real appearance. Shigeki teaches such that the appearance is less like a real appearance and such that the appearance is more like the real appearance (Shigeki “[0048] …in the embodiment, the disclosure level determination unit 9 determines the determination level, but the user may change the determination level after the determination by the disclosure level determination unit 9. 
Similarly, in the initial stage, the user himself / herself may set all the disclosure levels, and the determination level may be gradually changed in accordance with the determination by the disclosure level determination unit 9 as time elapses…the appearance of the characteristic portion may be deformed when the disclosure level becomes high, instead of performing processing such that the abstraction level becomes high as the disclosure level becomes low.”; “[0041] By subdividing the disclosure level, the disclosure level of the basic avatar is maximized (i.e., the basic avatar is displayed without abstracting the feature portion)…There is an advantage that it is possible to perform a fine response such as a moderate disclosure level (abstracting a feature portion to some extent)...”; and “[0043] The characteristic portion modification unit 12, as one mode of abstraction processing means, is for abstracting the appearance of the characteristic portion extracted by the characteristic portion extraction unit 10 by changing the appearance to a degree corresponding to the disclosure level.”)

Manteau-Rao and Shigeki are analogous art as both are related to avatars or virtual entities in the metaverse. Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Manteau-Rao by making the appearance less or more like a real appearance as taught by Shigeki and to use that within Manteau-Rao’s patient avatar in the virtual reality cognitive therapy (VRCT) device. The motivation for the above is to customize an avatar’s appearance so that a user can choose whether or not to be anonymous, enhancing the overall virtual experience. This allows the user to set their own preferences on whether their corresponding avatar in the metaverse looks like them or is different.
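The mapping the rejection reads onto Shigeki's disclosure levels (a lower degree of return corresponding to a more abstracted, less realistic appearance, per Shigeki [0043] and [0048]) can be sketched as follows. This is a hypothetical illustration only: the function names, the 0-1 scale, and the linear blend are assumptions for clarity, not anything disclosed in Manteau-Rao or Shigeki.

```python
# Hypothetical sketch of the claimed relationship: a lower "degree of
# return" yields a higher abstraction level, which pulls each appearance
# feature away from the patient's real value. All names, scales, and the
# linear blend are illustrative assumptions, not from either reference.

def abstraction_level(degree_of_return: float) -> float:
    """Lower degree of return -> higher abstraction, mirroring Shigeki's
    inverse relationship between disclosure level and abstraction level."""
    clamped = max(0.0, min(1.0, degree_of_return))
    return 1.0 - clamped

def blend_feature(real_value: float, abstract_value: float,
                  degree_of_return: float) -> float:
    """Linearly blend one appearance feature between a fully abstracted
    value and the patient's real value."""
    a = abstraction_level(degree_of_return)
    return a * abstract_value + (1.0 - a) * real_value

# Example: a skin-tone channel with real value 0.8 and a neutral
# abstracted value of 0.5.
print(blend_feature(0.8, 0.5, 1.0))   # full return -> real appearance: 0.8
print(blend_feature(0.8, 0.5, 0.0))   # no return -> fully abstracted: 0.5
```

The inverse mapping (more return, more realistic) is the core of the examiner's claim-6 reading; any monotone decreasing function of the degree of return would serve in place of the linear one assumed here.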
Regarding claim 9, Manteau-Rao teaches wherein the processing circuitry performs: outputting a second avatar which is an avatar of a doctor of the clinic in the metaverse via the output interface; and (Manteau-Rao “[0030] and the term ‘therapist’ may generally be considered equivalent to doctor…A real-world therapist may configure the system and/or monitor via a clinician tablet, which may be considered equivalent to a personal computer, laptop, mobile device, gaming system, or display.”; “[0031] Some embodiments may use a ‘virtual therapist’ and/or a ‘therapist avatar’, which may be used interchangeably herein. As part of a VRCT platform, a virtual therapist may comprise (and/or work in conjunction with) a virtual assistant and automatic speech recognition (ASR) service working in conjunction with a natural language processing (NLP). A therapist avatar may be considered an on-screen avatar of a virtual therapist. In some embodiments, other non-playable avatars may be controlled by a virtual therapist and/or a VRCT platform and feature a different appearance, voice, and/or other virtual characteristics.”; “[0081] Scenario 1100 may be displayed to a patient view via the head-mounted display, e.g., “Patient View.” In some embodiments, a head-mounted display (HMD) may generate a Patient View as a stereoscopic 3D image representing a first-person view of the virtual interface with which the patient may interact. An HMD may transmit Patient View, or a non-stereoscopic version, as “Spectator View” to, e.g., a clinician tablet for display.”; “[0149] Headset 201 may also provide visual feedback of virtual reality applications in concert with the clinician tablet and the small and large sensors.”; and “[0153] HMD 201 may comprise more than one connectivity option in order to communicate with the therapist's tablet.”) making an appearance of the second avatar in the metaverse according to the degree of return (Manteau-Rao Fig. 8, “[0069] …as illustrated in Step 801 of FIG. 
8, the user can select or create an avatar therapist for the Cognitive Therapy session. The user can select the age, gender, skin color, hair color, hair style, clothes, voice, weight, height, and/or any other characteristics for an avatar therapist…In this example of a "Catch It" exercise, as Step 802, the patient enters the virtual therapy room, and the patient can see the customized therapist avatar that he or she created.”; “[0082] …VR environment…a patient may also choose characteristics for a therapist avatar such as height, weight, skin color, gender, hairstyle, clothing style, etc…Avatar customization may be a straightforward user interface or series of menus. In some embodiments, a patient profile may be recorded and the avatar customization(s) associated with the patient and/or device may only need to be entered once. The avatar customizations may be stored in a patient or therapist profile, e.g., in local memory and/or at in a cloud server. Once physical and/or visual parameters for one or more avatars are input, or accessed from saved preferences, avatars may be rendered based on the parameters using VR application based on, e.g., software-development environment.”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. 
The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”)

However, Manteau-Rao is silent about abstraction. Shigeki teaches abstraction (Shigeki “[0008] …a feature part extraction means for extracting a feature part in the user identification avatar, and an abstraction processing means for performing abstraction processing to the feature part of the user identification avatar according to the disclosure level.”; “[0029] Specifically, the public avatar adjustment unit 3 includes a characteristic portion extraction unit 10 that extracts a characteristic portion from the basic avatar, and an abstraction processing unit 11 that performs abstraction processing on the characteristic portion extracted by the characteristic portion extraction unit 10.”; and “[0040] …the basic avatar is not made completely non-disclosed, but the basic avatar in which only the feature portion is abstracted is displayed, so that there is an advantage that privacy protection or the like can be realized without impairing the atmosphere of a service such as a game Or an SNS.”)

Manteau-Rao and Shigeki are analogous art as both are related to avatars or virtual entities in the metaverse. Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Manteau-Rao by abstraction as taught by Shigeki and to use that within Manteau-Rao’s avatar in the virtual reality cognitive therapy (VRCT) device. The motivation for the above is to customize an avatar’s appearance and to allow a user to be comfortable with how the avatars are represented.
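The emotion-bubble mechanism quoted repeatedly from Manteau-Rao [0074] (the patient rates emotional intensity from 1 to 10, and the bubble color reflects that intensity, with bright red representing intense anger) can be illustrated with a short sketch. The specific color endpoints and the linear interpolation are assumptions; the reference discloses only that color reflects intensity.

```python
# Hypothetical sketch of Manteau-Rao's emotion bubbles ([0074]): a 1-10
# intensity rating drives the bubble color toward bright red. The pale-pink
# starting color and linear interpolation are illustrative assumptions.

def bubble_color(intensity: int) -> tuple:
    """Map a 1-10 intensity rating to an RGB triple, fading from pale
    pink at 1 to bright red at 10."""
    if not 1 <= intensity <= 10:
        raise ValueError("intensity must be rated on a 1-10 scale")
    t = (intensity - 1) / 9          # normalize rating to [0, 1]
    pale, red = (255, 200, 200), (255, 0, 0)
    return tuple(round(p + t * (r - p)) for p, r in zip(pale, red))

print(bubble_color(1))    # -> (255, 200, 200)
print(bubble_color(10))   # -> (255, 0, 0)
```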
Regarding claim 10, Manteau-Rao teaches wherein the processing circuitry changes the appearance of the second avatar of the doctor as the degree of return becomes lower and changes the appearance of the second avatar of the doctor as the degree of return becomes higher (Manteau-Rao Fig. 8, “[0069] Perhaps more importantly, as illustrated in Step 801 of FIG. 8, the user can select or create an avatar therapist for the Cognitive Therapy session. The user can select the age, gender, skin color, hair color, hair style, clothes, voice, weight, height, and/or any other characteristics for an avatar therapist to create the most comfortable engagement for him or her. In this example of a "Catch It" exercise, as Step 802…customized therapist avatar...”; “[0082] Prior to entering a VR environment…a patient may also choose characteristics for a therapist avatar such as height, weight, skin color, gender, hairstyle, clothing style, etc…Avatar customization may be a straightforward user interface or series of menus...”; “[0069] The Cognitive Therapy session starts with the “Catch It” exercise in which a detailed example is illustrated in FIG. 8.”; and “[0074] …process 800 of FIG. 8…in Step 809…The therapist asks the patient to verbalize, say, and rate the intensity level of their emotion, for example from a scale of 1 to 10…emotion bubbles may change color to reflect the emotional intensity level of the patient. The bubble color intensity may use bright red to represent intense anger…the therapist invites the patient to allow their mind to wander into thoughts related to the situation and describe them as they arise, in Step 810. The spoken thoughts then appear or materialize in virtual clouds in the VR environment. The therapist then helps to weed out thoughts that are not workable or applicable, e.g., thoughts that are expression about emotions (using natural language processing (NLP)) technology, in Step 811. 
Finally, in this example, the therapist asks the patient to select the troublesome thought (of a situation) with their gaze, in Step 812. Only that identified thought remains in a cloud.”) However, Manteau-Rao is silent about such that the appearance is less like a real appearance and such that the appearance is more like the real appearance. Shigeki teaches such that the appearance is less like a real appearance and such that the appearance is more like the real appearance (Shigeki “[0048] …in the embodiment, the disclosure level determination unit 9 determines the determination level, but the user may change the determination level after the determination by the disclosure level determination unit 9. Similarly, in the initial stage, the user himself / herself may set all the disclosure levels, and the determination level may be gradually changed in accordance with the determination by the disclosure level determination unit 9 as time elapses…the appearance of the characteristic portion may be deformed when the disclosure level becomes high, instead of performing processing such that the abstraction level becomes high as the disclosure level becomes low.”; “[0041] By subdividing the disclosure level, the disclosure level of the basic avatar is maximized (i.e., the basic avatar is displayed without abstracting the feature portion)…There is an advantage that it is possible to perform a fine response such as a moderate disclosure level (abstracting a feature portion to some extent)...”; and “[0043] The characteristic portion modification unit 12, as one mode of abstraction processing means, is for abstracting the appearance of the characteristic portion extracted by the characteristic portion extraction unit 10 by changing the appearance to a degree corresponding to the disclosure level.”) Manteau-Rao and Shigeki are analogous art as both are related to avatars or virtual entities in the metaverse. 
Therefore, it would have been obvious for a person having ordinary skill in the art before the effective filing date of the claimed invention to have modified Manteau-Rao by making the appearance less or more like a real appearance as taught by Shigeki and to use that within Manteau-Rao’s doctor avatar in the virtual reality cognitive therapy (VRCT) device. The motivation for the above is to customize an avatar’s appearance. This allows the user to set their own preferences on how certain avatars should look.

Pertinent Art

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure:
US Patent Application Publication US 2023/0038695 A1 (Yee et al.) discloses customizable virtual reality activities depending on a user’s abilities.
US Patent Application Publication US 2019/0385711 A1 (Shriberg et al.) discloses systems for assessing a mental state of a subject.
US Patent Application Publication US 2018/0254097 A1 (Gani et al.) discloses simulation systems for effecting behavior change.
US Patent Application Publication US 2018/0193589 A1 (McLaughlin et al.) discloses an immersive system for health and wellness.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to AMELIA VELAZQUEZ VALENCIA whose telephone number is (571)272-7418. The examiner can normally be reached M-F, 8:30AM-5:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Said A. Broome, can be reached at (571) 272-2931. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /A.V.V/Examiner, Art Unit 2612 /Said Broome/Supervisory Patent Examiner, Art Unit 2612 Date: 1/29/2026

Prosecution Timeline

Jun 03, 2024
Application Filed
Feb 05, 2026
Non-Final Rejection — §101, §102, §103 (current)

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: Favorable
Median Time to Grant: 2y 10m
PTA Risk: Low

Based on 0 resolved cases by this examiner. Grant probability derived from career allow rate.
