Prosecution Insights
Last updated: April 19, 2026
Application No. 17/986,135

METHOD AND SYSTEM FOR REMEMBERING ACTIVITIES OF PATIENTS WITH PHYSICAL DIFFICULTIES AND MEMORIES OF THE DECEASED ON METAVERSE PLATFORM

Non-Final OA: §103, §112

Filed: Nov 14, 2022
Examiner: SAMS, MICHELLE L
Art Unit: 2611
Tech Center: 2600 — Communications
Assignee: Korea Electronics Technology Institute
OA Round: 3 (Non-Final)

Grant Probability: 76% (Favorable)
OA Rounds: 3-4
To Grant: 2y 11m
With Interview: 84%

Examiner Intelligence

Career Allow Rate: 76% (364 granted / 481 resolved); +13.7% vs TC avg (above average)
Interview Lift: +8.4% (moderate), resolved cases with an interview vs. without
Typical Timeline: 2y 11m average prosecution; 10 applications currently pending
Career History: 491 total applications across all art units
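
As a quick consistency check, the career figures above reconcile with one another. A minimal sketch in Python, using only numbers shown on this page:

    # Consistency check of the examiner career figures shown above.
    granted, resolved, pending = 364, 481, 10

    allow_rate = granted / resolved   # 364/481 = 0.757 -> displayed as 76%
    total_apps = resolved + pending   # 481 + 10 = 491 total applications

    print(f"Career allow rate: {allow_rate:.1%}")   # 75.7%
    print(f"Total applications: {total_apps}")      # 491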

Statute-Specific Performance

§101: 16.1% (-23.9% vs TC avg)
§103: 51.5% (+11.5% vs TC avg)
§102: 10.6% (-29.4% vs TC avg)
§112: 14.8% (-25.2% vs TC avg)

Tech Center average is an estimate. Based on career data from 481 resolved cases.
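
The "vs TC avg" deltas are internally consistent with a single baseline. A minimal sketch (Python) that back-computes the implied Tech Center average, assuming each delta is the examiner's rate minus the TC average in percentage points (an assumption; the tool does not document its formula):

    # Back-compute the implied Tech Center average from the figures above.
    # Assumption: delta = examiner rate - TC average, in percentage points.
    examiner_rate = {"101": 16.1, "103": 51.5, "102": 10.6, "112": 14.8}
    delta_vs_tc = {"101": -23.9, "103": 11.5, "102": -29.4, "112": -25.2}

    for statute, rate in examiner_rate.items():
        print(f"§{statute}: implied TC avg {rate - delta_vs_tc[statute]:.1f}%")
    # All four statutes imply the same 40.0% baseline, suggesting the chart
    # used one shared Tech Center estimate rather than per-statute averages.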

Office Action

Rejections: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 08/28/2025 has been entered.

Response to Amendment

The amendment filed 08/13/2025 has been entered and made of record. Claims 7-11 are withdrawn. Claims 1-6 and 12-15 are pending.

Response to Arguments

Applicant's arguments filed 08/13/2025 have been fully considered but they are not persuasive. Applicant argues JONES et al. (2009/0044113 A1) in view of TAYLOR et al. (2020/0289938 A1) fails to teach the limitations of claim 1. Specifically, Applicant argues Taylor does not correct the image and voice data when the user is unhealthy. From the rationale of claim 1, Jones is relied upon as correcting the image and voice data of the user when they are injured to a point when they were healthy. Taylor is relied upon as training an AI model on input data [0032, claim 11]. The system/method of Taylor applies the input data to the avatar [claim 11]. In the combined invention, Taylor is trained based on input data of when the user is healthy. Jones also teaches collecting data when the user is healthy [0023, 0062] to generate a model to apply to an avatar [0092]. Therefore, the avatar is presented based on the data obtained.

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claims 1-6 and 12-15 are rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement. The claims contain subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.
Claim 1 was amended to recite, "An avatar generation module configured to generate a virtual avatar reflecting a facial image of the user in a metaverse, based on the corrected image and voice data of the user." Applicant's disclosure teaches the avatar generator module (130) may generate a virtual avatar reflecting a facial image … based on the corrected image (see [0039, 0050] of Pub. No. 2023/0196057 A1), but does not teach using the corrected voice data of the user to generate a virtual avatar reflecting a facial image of the user in a metaverse. Applicant's disclosure provides support for correcting a user's speech based on corrected voice data (see [0038] of Pub. No. 2023/0196057 A1). It is recommended to amend claim 1 to also include generating a virtual avatar to reflect the voice of the user in a metaverse.

Claims 2-5 are further rejected under 35 U.S.C. 112(a) due to their dependency on rejected claim 1. Claim 6 recites a similar amendment as claim 1 but in process form; therefore, the same 35 U.S.C. 112(a) rejection used for claim 1 is applied. Claims 12-15 are further rejected under 35 U.S.C. 112(a) due to their dependency on rejected claim 6.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 1-6 and 12-15 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 1 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being incomplete for omitting essential steps, such omission amounting to a gap between the steps. See MPEP § 2172.01. The omitted steps are: obtaining image and voice data of the user when the user was healthy before having the disease. This data is needed to train the AI model in order to generate a corrected image and corrected voice data as claimed. Claims 2-5 fail to provide the essential steps needed of claim 1. Claims 2-5 are further rejected under 35 U.S.C. 112(b) due to their dependency on rejected claim 1. Claim 6 recites a similar amendment as claim 1 but in process form; therefore, the same 35 U.S.C. 112(b) rejection used for claim 1 is applied. Claims 12-15 are further rejected under 35 U.S.C. 112(b) due to their dependency on rejected claim 6.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103, which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 6, 12, and 13 are rejected under 35 U.S.C. 103 as being unpatentable over JONES et al. (2009/0044113 A1) in view of TAYLOR et al. (2020/0289938 A1).

RE claim 1, Jones teaches a system and method that captures detectable attributes of a user which can further be applied to an avatar [abstract]. Jones further teaches a metaverse platform operating system comprising:

(a) a data collection unit configured to collect image and voice data of a user, captured when the user was unhealthy after having a disease;

Fig. 1, capture system (110) captures a selection of one or more detectable features (114) of a user in two or more dimensions [0023]. Detectable features (114) may include, but are not limited to, structural features (skin characteristics and movement, muscle structure and movement, bone structure and movement), external features (features of a user which can be captured by capturing and analyzing visual and sensed images of a user) (said collect image of a user), and behavioral features (words spoken by a user, voice patterns of user speech, verbal responses, non-verbal responses) (said collect voice data of a user) [0023]. Jones provides the example of when a user suffers from an injury (said user was unhealthy) but has a previously captured set of attributes: regeneration system (610) receives current captured user attributes and compares the currently captured user attributes (said captured when the user was unhealthy) with the previously captured set of user attributes to detect the changes from the injury [0093]. Although Jones teaches an example of capturing the user after an injury, it would have been obvious before the effective filing date of the claimed invention that the current captured unhealthy images of the user could result from an open-ended list of issues, such as a disease. The main objective of Jones is to reconstruct the user's current attributes to a previous version of the user [0092-0093].

(b) a voice and image correction module configured to correct an image and a voice of the user using a trained AI model, by inputting the collected image and voice data to the trained AI model, wherein the trained AI model is configured to generate a corrected image and corrected voice data of the user, corresponding to image and voice data of the user when the user was healthy before having the disease;

Fig. 1, capture system (110) analyzes object data for one or more detectable features (114) collected by one or more capture devices, generates user attributes (112) specifying the structural, external, and behavioral characteristics of one or more attributes of a user, and outputs user attributes (112) [0024]. User attributes (112) may include a 3D structural model of a user, a 3D external model of a user, a voice characteristic model, and a behavioral or personality model [0024].
User attributes (112) may include one or more types of models that represent a particular portion of a user's detectable features or represent a user's body type and proportions [0025]. Differential system (130) receives user attributes (112) (said inputting the collected image and voice data) and compares user attributes (112) with normalized models (136) (said corresponding to when the user was healthy) to detect the differences and outputs the differences as differential attributes (132) [0027, 0068-0074]. Differential attributes (132) specify those attributes of a user which distinguish the user from other users [0076]. Capture devices (302) transmit object data (314) to one or more user attributes generators (304) and training controller (310) (said model) [0057]. Training controller (310) receives object data (314) and associates object data (314) with a particular behavioral attribute [0058]. Capture system (110) may include training scripts (312) to use in prompting a user to perform actions, speak words, or respond to questions that are applicable to the adjustable characteristics of a base avatar (144) (said the trained AI model … corresponding to image and voice data of the user when the user was healthy before having the disease) [0062].

Fig. 6, storage system (150) may receive one or more user attributes (112), differential attributes (132), or custom avatar (142) at one time or over multiple points in time [0091]. A regeneration system (610) (said voice and image correction module) stores user attributes or differential attributes for a particular user. At a later point in time, if the user needs to have reconstructive surgery to recreate a user feature as captured and stored at a previous point in time (said corresponding to image and voice data of the user when the user was healthy before having the disease), regeneration system (610) adjusts current user attributes with the captured user attributes or differential attributes from a previous point in time to generate recreated user attributes (612) [0092]. Recreated user attributes (612) specify previously captured user attributes such that a user may request an avatar creator system create a custom avatar for the user based on the user's captured characteristics at a particular point in time (said correct an image and a voice) [0092]. Jones provides the example of when a user suffers from an injury. The regeneration system (610) receives current captured user attributes and compares the currently captured user attributes with the previously captured set of user attributes to detect the changes from the injury. From the difference detected, regeneration system (610) may prompt a user to select which portions of the previously captured user attributes that are different from the current user attributes should be applied to the current attributes [0093].

Jones teaches correcting a voice but fails to disclose using a trained AI model. Taylor teaches a system/method for training a character for a game. Fig. 1A-1, virtual scenes are displayed on an HMD or another display device [0028]. The AI model (AI1A) receives the pattern (26) as an AI input (18) and the pattern (28) as an AI input (20), and learns from the patterns (26, 28) to generate an AI output (22), which is a learned method [0030]. Fig. 1A-2 illustrates training of the AI model (AI1A) by a user (A) so that the artificial intelligence model (AI1A) reacts in the same or similar manner in which the user (A) reacts during a play of the game (said trained AI model) [0032]. Taylor teaches receiving a recording of voice data of the user during the display of the one or more scenes of the game (said a data collection unit configured to collect image and voice data of a user) and integrating the voice data with the inputs used to train the AI model to enable the character to apply the voices (said correction module configured to correct voice of the user using a trained AI model) [claim 11 language]. It would have been obvious before the effective filing date of the claimed invention to utilize an AI model to apply the corrected voice rather than the user selecting a model, as taught by Jones, because an AI model can be adaptive and adjust vocal characteristics to maintain realism across different scenarios, which can be difficult to achieve with static voice selections.

(c) an avatar generation module configured to generate a virtual avatar reflecting a facial image of the user in a metaverse, based on the corrected image and voice data of the user; and

Custom avatar (142) may be a virtual representation of a user communicating within a chat room or via another network-based communication channel (said metaverse) [0032]. Fig. 5, avatar creator system (140) (said avatar generation module) receives differential attributes (132) and applies them to a base avatar (508) to output the customized avatar to one or more environments (said generate a virtual avatar) [0078]. In one example, base avatars (508) define a graphical image that is adjustable to represent a mirror image of the actual image of a person [0080]. In another example, base avatars (508) define a graphical image that is adjustable to include those attributes which distinguish a person based on the captured image of the person (said virtual avatar reflecting a facial image of a user) [0080]. As disclosed in the rationale of claim 1(b), regeneration system (610) (said correction module) receives current captured user attributes and compares the currently captured user attributes with the previously captured set of user attributes to detect the changes from the injury. From the difference detected, regeneration system (610) may prompt a user to select which portions of the previously captured user attributes that are different from the current user attributes should be applied to the current attributes (said based on the corrected image) [0093]. Furthermore, as taught in the rationale of claim 1(b), capture system (110) may include training scripts (312) to use in prompting a user to perform actions, speak words, or respond to questions that are applicable to the adjustable characteristics of a base avatar (144) (said voice data) [0062]. Fig. 6, regeneration system (610) stores user attributes or differential attributes for a particular user. At a later point in time, if the user needs to have reconstructive surgery to recreate a user feature as captured and stored at a previous point in time (said corrected voice data of the user), regeneration system (610) adjusts current user attributes with the captured user attributes or differential attributes from a previous point in time to generate recreated user attributes (612) [0092].

(d) an avatar action control module configured to control, using the trained AI model, an action of the generated avatar according to an inputted input signal.

Jones teaches the custom avatar (142) may be used as a virtual representation of a user communicating within a chat room [0032]; however, Jones does not go into depth on the logistics of controlling the avatar. Taylor teaches a system/method for training a character for a game. Fig. 1A-1, virtual scenes are displayed on an HMD or another display device [0028]. The AI model (AI1A) receives the pattern (26) as an AI input (18) and the pattern (28) as an AI input (20), and learns from the patterns (26, 28) to generate an AI output (22), which is a learned method [0030]. Fig. 1A-2 illustrates training of the AI model (AI1A) by a user (A) so that the artificial intelligence model (AI1A) reacts in the same or similar manner in which the user (A) reacts during a play of the game (said trained AI model) [0032]. The user uses a hand-held controller (108) to generate input data (116) [0036]. The servers (A-C) analyze the input data (116) to generate one or more image frames of a virtual scene (110) in which the character (C1), which represents the artificial intelligence model (AI1A), performs certain actions (said control, using the trained AI model, an action of the generated avatar according to an inputted input signal) [0039]. The artificial intelligence model (AI1A) is mapped with, or linked to, the user account (1) [0039]. The servers (A-C) analyze the input data (116) to determine or identify one or more interaction patterns (119) of the character (C1) associated with the input data (116) [0040]. The servers (A-C) store the one or more interaction patterns (119) within memory devices as a training program to train the artificial intelligence model (AI1A). The one or more interaction patterns (119) are provided as inputs to the artificial intelligence model (AI1A) to enable the artificial intelligence model (AI1A) to learn from the one or more interaction patterns (119), and the learned methods or operations are applied by the artificial intelligence model (AI1A) to new virtual scenes, which are different from the virtual scene (110) [0041]. The new virtual scenes are displayed from image frames that are generated by execution of the game program by one or more of the servers (A-C). The learned methods or operations may also be applied to virtual scenes that are similar to or the same as the virtual scene (110) [0041]. It would have been obvious before the effective filing date of the claimed invention to utilize the trained AI model of Taylor to control the avatar of Jones because the learning by the artificial intelligence model reduces the amount of input data being transferred during the session between a client and server while providing a better experience to the user. The reduction in the amount of input data reduces the amount of network traffic being transferred between the client and server. The reduction in the amount of network traffic increases the speed of transfer of network data between the client and servers. As such, when the artificial intelligence model is trained, the input data that is transferred via a computer network is reduced to decrease network latency [Taylor: 0010].
RE claim 2, Jones teaches wherein the voice and image correction module is configured to realize a figure of the user when the user was healthy, by correcting a figure corresponding to at least one of a decrepit/damaged figure of the user's face, a twisted figure of the face, an injury to the face and body, a lean figure of the face and body, a skin tone, and hair loss, which are caused by a disease and a side effect of treatment of the user, in the collected image data. The language of claim 2 recites "at least one of," which limits the claim to needing only one of the listed limitations. Therefore, Jones teaches wherein the voice and image correction module is configured to realize a figure of the user when the user was healthy, by correcting a figure corresponding to an injury to the face and body in the collected data. It should be noted that since only one limitation is required, the limitations of a decrepit/damaged figure of the user's face, a twisted figure of the face, a lean figure of the face and body, a skin tone, and hair loss are moot.

Fig. 6, Jones teaches storage system (150) may receive one or more user attributes (112), differential attributes (132), or custom avatar (142) at one time or over multiple points in time [0091]. A regeneration system (610) (said voice and image correction module) stores user attributes or differential attributes for a particular user. At a later point in time, if the user needs to have reconstructive surgery to recreate a user feature as captured and stored at a previous point in time (said when the user was healthy), regeneration system (610) adjusts current user attributes with the captured user attributes or differential attributes from a previous point in time to generate recreated user attributes (612) [0092]. Recreated user attributes (612) specify previously captured user attributes such that a user may request an avatar creator system create a custom avatar for the user based on the user's captured characteristics at a particular point in time [0092]. Jones provides the example where a user suffers from an injury (said an injury to the face and body) but has a previously captured set of attributes (said when the user was healthy): regeneration system (610) receives current captured user attributes and compares the currently captured user attributes with the previously captured set of user attributes to detect the changes from the injury. From the difference detected, regeneration system (610) may prompt a user to select which portions of the previously captured user attributes that are different from the current user attributes should be applied to the current attributes (said by correcting a figure) [0093].

RE claim 3, Jones teaches wherein the voice and image correction module is configured to realize a conversation style that the user used when the user was healthy, by correcting a stammering portion of the user and restoring a sentence formed of a short sentence in the collected voice data. Fig. 1, capture system (110) captures a selection of one or more detectable features (114) of a user in two or more dimensions [0023]. Detectable features (114) may include, but are not limited to, behavioral features (words spoken by a user, voice patterns of user speech, verbal responses, non-verbal responses) [0023]. Fig. 1, capture system (110) analyzes object data for one or more detectable features (114) collected by one or more capture devices, generates user attributes (112) specifying the structural, external, and behavioral characteristics of one or more attributes of a user, and outputs user attributes (112) [0024]. User attributes (112) may include a voice characteristic model [0024]. Differential system (130) receives user attributes (112) and compares user attributes (112) with normalized models (136) to detect the differences and outputs the differences as differential attributes (132) [0027, 0068-0074]. Differential attributes (132) specify those attributes of a user which distinguish the user from other users [0076]. Capture devices (302) transmit object data (314) to one or more user attributes generators (304) and training controller (310) [0057]. Training controller (310) receives object data (314) and associates object data (314) with a particular behavioral attribute [0058]. Capture system (110) may include training scripts (312) to use in prompting a user to perform actions, speak words, or respond to questions that are applicable to the adjustable characteristics of a base avatar (144) (said conversation style that the user used when the user was healthy) [0062]. Fig. 6, storage system (150) may receive one or more user attributes (112), differential attributes (132), or custom avatar (142) at one time or over multiple points in time (said conversation style that the user used when the user was healthy) [0091]. A regeneration system (610) (said voice and image correction module) stores user attributes or differential attributes for a particular user. At a later point in time, if the user needs to have reconstructive surgery to recreate a user feature as captured and stored at a previous point in time (said conversation style that the user used when the user was healthy), regeneration system (610) adjusts current user attributes with the captured user attributes or differential attributes from a previous point in time to generate recreated user attributes (612) [0092]. Recreated user attributes (612) specify previously captured user attributes such that a user may request an avatar creator system create a custom avatar for the user based on the user's captured characteristics at a particular point in time (said correcting and restoring) [0092]. Jones does not go into detail about the injury and its implications for the user. It would have been obvious before the effective filing date of the claimed invention that the injury could come from an open-ended list, with open-ended implications, such as stammering of the voice. Jones teaches obtaining voice models of the user during different points in time [0024]. Jones further teaches permitting previous user attributes to be applied to the avatar. Therefore, the voice model prior to an injury can be applied to the avatar in order for the user to sound as they did prior to the injury. This would be beneficial to a user with an impediment, such that when they converse within the chat room, they can feel confident in their speech.

RE claim 6, claim 6 recites similar limitations as claim 1 but in process form. Therefore, the same rationale used for claim 1 is applied.

RE claim 12, claim 12 recites similar limitations as claim 2 but in process form. Therefore, the same rationale used for claim 2 is applied.

RE claim 13, claim 13 recites similar limitations as claim 3 but in process form. Therefore, the same rationale used for claim 3 is applied.

Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over JONES et al. (2009/0044113 A1) in view of TAYLOR et al. (2020/0289938 A1) as applied to claims 1 and 6 respectively, and further in view of INOMATA (2018/0341386 A1).

RE claim 4, Jones in view of Taylor teaches the limitations of claim 4 with the exception of tracing the user's pupils to control the avatar. However, Inomata is made of record as teaching the art of defining a virtual space that includes avatars. Specifically, Inomata teaches wherein the avatar action control module is configured to trace the user's pupils, to process a position change of the moving pupils as an input signal when the pupils move, and to control an action of the avatar. In regards to Fig. 1 of Inomata, the system (100) includes an HMD (120) with an eye gaze sensor (140) [0041-0043]. The HMD (120) displays a virtual space to the user (5) during operation [0045]. The eye gaze sensor (140) detects a direction in which the lines of sight of the right and left eyes of the user (5) are directed [0052]. The eye gaze sensor (140) is implemented by a sensor having the eye tracking function (said is configured to trace user's pupils) [0052, Fig. 5, 0090-0093]. Fig. 12A-B, Inomata teaches an avatar object which is displayed in virtual space (11) [0158-0159]. The avatar is an object associated with the user wearing the HMD (120) [0182]. The processor (210A) translates an operation by the user (5B) in the avatar object (6B) arranged in the virtual space (11A) [0162]. Fig. 13, the processor (210) acquires motion information including direction data and eye tracking data detected by the HMD sensor (410), the eye gaze sensor (140), and the like. As a result, avatar/character information including sound data, controller information, and motion information is acquired [0203]. Fig. 19, the processor (210) controls the motion of each of the avatars (6A, 6B) based on the motion information (direction data and eye tracking data) included in the avatar/character information on each of the users (5A, 5B) [0208]. The processor (210) changes, based on the direction data of each of the users (5A, 5B), the direction of the head of the corresponding avatar (6A or 6B). The processor causes, based on the eye tracking data of each of the users (5A, 5B), the corresponding avatar (6A, 6B) to blink, and changes the line-of-sight direction of that avatar (6A, 6B) (said to control an action of the avatar) [0208]. It would have been obvious before the effective filing date of the claimed invention to include the gaze tracking of Inomata to control the movement of the avatar of Jones in view of Taylor in order for the avatar of Jones to mimic the user in a realistic manner.

RE claim 14, claim 14 recites similar limitations as claim 4 but in process form. Therefore, the same rationale used for claim 4 is applied.

Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over JONES et al. (2009/0044113 A1) in view of TAYLOR et al. (2020/0289938 A1) as applied to claims 1 and 6 respectively, and further in view of SMITH (2018/0173404 A1).

RE claim 5, Jones in view of Taylor teaches the limitations of claim 5 with the exception of controlling an avatar based on calling a user's name. Smith is relied upon as teaching virtual reality content [abstract]. Specifically, Smith teaches wherein the avatar action control module is configured to, when avatars of other users connect to the metaverse in addition to the avatar of the user and the user calls a name of a specific user, control the avatar of the user to approach an avatar of the called specific user, or request the called specific user to approach the avatar of the user. Smith teaches the user provides input that selects real-world objects that will be included in the user experience but also replaces other portions of the real-world environment with virtual content [0018]. Fig. 1, environment (100) includes a user (101) that uses an HMD (104) [0037]. The virtual reality content selection engine (114) provides an interface for a user to select virtual reality content to be included in the user experience [0040]. The method/system of Smith can also facilitate interactions with colleagues in remote locations. When the user says the co-worker's name, the user device establishes a connection with the co-worker for a video conference (said user calls a name of a specific user, request the called specific user to approach the avatar of the user) [0069]. It would have been obvious before the effective filing date of the claimed invention to provide a call function as taught by Smith with the chat room function of Jones in view of Taylor as a quick implementation to request a user.

RE claim 15, claim 15 recites similar limitations as claim 5 but in process form. Therefore, the same rationale used for claim 5 is applied.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MICHELLE L SAMS: direct telephone number (571) 272-7661; email michelle.sams@uspto.gov. The examiner is currently part time and can be reached Mon.-Fri. 5:30am-9:30am. Examiner interviews are available via telephone and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Kee M. Tung, can be reached at (571) 272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MICHELLE L SAMS/
Primary Examiner, Art Unit 2611
27 January 2026

Prosecution Timeline

Nov 14, 2022: Application Filed
Feb 07, 2025: Non-Final Rejection — §103, §112
May 07, 2025: Response Filed
May 28, 2025: Final Rejection — §103, §112
Jul 08, 2025: Response after Non-Final Action
Aug 13, 2025: Examiner Interview Summary
Aug 13, 2025: Applicant Interview (Telephonic)
Aug 28, 2025: Request for Continued Examination
Sep 02, 2025: Response after Non-Final Action
Jan 27, 2026: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592009: MEDICAL MONITORING ANALYSIS AND REPLAY INCLUDING INDICIA RESPONSIVE TO LIGHT ATTENUATED BY BODY TISSUE (granted Mar 31, 2026; 2y 5m to grant)
Patent 12561861: DYNAMIC RESOURCE CONSTRAINT BASED SELECTIVE IMAGE RENDERING (granted Feb 24, 2026; 2y 5m to grant)
Patent 12555297: PRESENTATION OF TOPIC INFORMATION USING ADAPTATIONS OF A VIRTUAL ENVIRONMENT (granted Feb 17, 2026; 2y 5m to grant)
Patent 12548536: Image Processing Method Based on Vertical Synchronization Signal and Electronic Device (granted Feb 10, 2026; 2y 5m to grant)
Patent 12548213: IMAGE INSPECTION SYSTEM (granted Feb 10, 2026; 2y 5m to grant)

Study what changed to get past this examiner. Based on the 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 76%
With Interview: 84% (+8.4%)
Median Time to Grant: 2y 11m
PTA Risk: High
Based on 481 resolved cases by this examiner. Grant probability derived from career allow rate.
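
A minimal sketch (Python) of how the headline projections appear to follow from the career data shown on this page, assuming the interview lift is simply added in percentage points (an assumption; the tool's actual model is not disclosed):

    # Hypothetical derivation of the projection figures; the additive
    # interview lift is an assumption, not a documented formula.
    granted, resolved = 364, 481
    interview_lift_pts = 8.4

    base = 100 * granted / resolved              # 75.7 -> displayed as 76%
    with_interview = base + interview_lift_pts   # 84.1 -> displayed as 84%

    print(f"Grant probability: {base:.0f}%")           # 76%
    print(f"With interview: {with_interview:.0f}%")    # 84%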
