Prosecution Insights
Last updated: April 19, 2026
Application No. 18/326,318

ELECTRONIC DEVICE AND METHOD FOR PROVIDING CONVERSATION FUNCTION USING AVATAR

Status: Non-Final Office Action (§103, §112)
Filed: May 31, 2023
Examiner: SHAH, SUJIT
Art Unit: 2624
Tech Center: 2600 — Communications
Assignee: Samsung Electronics Co., Ltd.
OA Round: 5 (Non-Final)

Grant Probability: 66% (Favorable)
Expected OA Rounds: 5-6
Time to Grant: 2y 8m
Grant Probability with Interview: 77%

Examiner Intelligence

Career Allow Rate: 66%, above average (269 granted / 408 resolved; +3.9% vs TC avg)
Interview Lift: +11.4%, a moderate lift, comparing allow rates with vs. without an interview among resolved cases
Typical Timeline: 2y 8m average prosecution; 37 applications currently pending
Career History: 445 total applications across all art units
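To make the arithmetic behind these cards concrete, here is a minimal sketch. The 269/408 counts and the +11.4% lift figure come from this page; the with/without-interview split below is hypothetical, since the page reports only the resulting lift, not the underlying counts.

```python
# Minimal sketch of how the examiner cards above are derived.
# Grounded in the page: 269 granted / 408 resolved; lift reported as +11.4%.
# Hypothetical: the with/without-interview split (not shown on the page).

def allow_rate(granted: int, resolved: int) -> float:
    """Allow rate = granted outcomes / all resolved outcomes."""
    return granted / resolved

career = allow_rate(269, 408)  # ~65.9%, displayed as 66%

# Interview lift = allow rate with an interview minus allow rate without.
# These split counts are made up for illustration; they sum to 269/408.
rate_with = allow_rate(75, 100)
rate_without = allow_rate(194, 308)
lift = rate_with - rate_without  # ~ +12.0% here; the page reports +11.4%

print(f"career allow rate: {career:.1%}")
print(f"interview lift:    {lift:+.1%}")
```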

Statute-Specific Performance

§101: 2.3% (-37.7% vs TC avg)
§103: 65.4% (+25.4% vs TC avg)
§102: 12.7% (-27.3% vs TC avg)
§112: 16.1% (-23.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 408 resolved cases.
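One useful sanity check: every delta above back-solves to the same Tech Center average estimate of 40.0% (for example, 65.4% - 25.4% = 40.0%), so the "vs TC avg" figure is simply the statute rate minus that single estimate. A short sketch reproducing the rows above, using only the figures shown on this page:

```python
# Reproduces the statute rows above. Rates are from the page; the 40.0%
# Tech Center average is back-solved from any row (rate minus its delta).
TC_AVG = 0.400

statute_rates = {"§101": 0.023, "§103": 0.654, "§102": 0.127, "§112": 0.161}

for statute, rate in statute_rates.items():
    delta = rate - TC_AVG
    print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")
```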

Office Action

Non-Final Rejection (Oct 27, 2025): §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 10/10/2025 has been entered.

Priority

Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Claims 1-20, 26, and 36 are cancelled.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 21 and 31 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention.

Claim 21 recites the limitation "the first avatar is displayed in a first area of the first screen, and the other avatars are displayed in a second area of the first screen, the second area being configured to provide a virtual space captured by a virtual camera and used for conversation; wherein the second view comprises either zoom-in view or a zoom-out view of the virtual space including the other avatars displayed in the second area without changing a view of the first avatar displayed in the first area" in lines 13-25. It is not clear whether the first area corresponds to the first area of the first screen or something else. Similarly, it is not clear whether the second area corresponds to the second area of the first screen or a different one. Hence the claim is indefinite.

Claims 22-25, 27-30, and 41 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, for being directly or indirectly dependent on claim 21.

Claim 31 recites the limitation "the first avatar is displayed in a first area of the first screen, and the other avatars are displayed in a second area of the first screen, the second area being configured to provide a virtual space captured by a virtual camera and used for conversation; wherein the second view comprises either zoom-in view or a zoom-out view of the virtual space including the other avatars displayed in the second area without changing a view of the first avatar displayed in the first area" in lines 7-17. It is not clear whether the first area corresponds to the first area of the first screen or something else. Similarly, it is not clear whether the second area corresponds to the second area of the first screen or a different one. Hence the claim is indefinite.

Claims 32-35, 37-40, and 42 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, for being directly or indirectly dependent on claim 31.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 21, 31, and 41-42 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (US Pub 2019/0294313) in view of KAWAKAMI et al. (US Pub 2022/0232191).

With respect to claim 21, LEE discloses an electronic device (fig. 1; client device 106) comprising: at least one camera (fig. 2; discloses client device 200 includes video camera and audio device 222); a display (fig. 1; display screen 128); a communication circuit (fig. 2; communication interface 206); and at least one processor including processing circuitry (fig. 2; data processing unit 202), individually and/or collectively configured to control an operation of the at least one camera, the display, and/or the communication circuit (par 0055; discloses the data store 208 may store data for the operations of processes, applications, components, and/or modules stored in computer-readable media 204 and/or executed by data processing unit(s) 202. The data store 208 may also include content data 214, such as the content 150 that includes video, audio, or other content for rendering and display on one or more of the display screens 128), wherein at least one processor, individually and/or collectively, is configured to: control the display to display a first screen including avatars corresponding to a plurality of participants, wherein the first screen includes a first avatar corresponding to a user of the electronic device and other avatars among the avatars corresponding to remaining participants, the first avatar is displayed in a first area of the first screen, and the other avatars are displayed in a second area of the first screen, the second area being configured to provide a virtual space captured by a virtual camera and used for conversation (fig. 3; discloses displaying an image including a plurality of avatars that correspond to a plurality of participants; the first avatar is displayed in a first area 312 while the other avatars are displayed in the second area; see par 0059-0060); receive an input while the first screen including the first avatar and the other avatars is displayed; and control the display to display a second screen that switches a first view of the first screen to a second view in response to receiving the input, wherein the second view comprises either zoom-in view or a zoom-out view of the virtual space including the other avatars displayed in the second area without changing a view of the first avatar displayed in the first area (fig. 7; discloses based on the user touch input, displays a second screen where the second screen is a zoom-in view of the virtual space including the other avatars displayed in the second area without changing a view of the first avatar displayed in the first area 312; see par 0074-0075).

LEE does not expressly disclose: in response to switching from the first avatar speaking to one or more second avatars speaking, identify the one or more second avatars corresponding to a speaker among the other avatars, and control the display to display a third screen including an image with the one or more second avatars as the speaker, wherein the image included in the third screen is configured such that the second avatar's movement from a side to a center is synchronized with the first avatar's gaze; wherein a third view of the third screen is determined based on a number of the one or more second avatars.

In the same field of endeavor, KAWAKAMI discloses a content display method and system (see abstract). KAWAKAMI discloses: in response to switching from the first avatar speaking to one or more second avatars speaking, identify the one or more second avatars corresponding to a speaker among the other avatars (par 0097; discloses in response to the detection of the conversation start trigger, the avatar controller 333a changes an arrangement of at least a part of the avatars in the virtual space, and updates the avatar arrangement information (step S33)), and control the display to display a third screen including an image with the one or more second avatars as the speaker, wherein the image included in the third screen is configured such that the second avatar's movement from a side to a center is synchronized with the first avatar's gaze; wherein a third view of the third screen is determined based on a number of the one or more second avatars (par 0103; discloses, as illustrated in FIG. 9A, the conversation avatar C, which is a conversation partner of the conversation avatar A, is displayed near the center of the display 15. More specifically, even when the conversation avatar C is not moved from the state in FIG. 6 (the participant C is not performing the operation for moving the avatar C), the conversation avatar C is displayed near the center of the display 15 in response to the conversation start trigger; par 0107; discloses the arrangement positions of the non-conversation avatars B and D may be the same as or different from those in FIG. 6. Even in a different case, it is desirable to consider the arrangement position in FIG. 6, more specifically, the relative positional relationship between the non-conversation avatars B and D. For example, since the non-conversation avatar D is on the right side of the non-conversation avatar B in FIG. 6, the non-conversation avatar D is preferably arranged on the right side of the non-conversation avatar B in FIG. 9A in order to maintain such a positional relationship. In any case, since the importance is lower than that of the conversation avatar C, the non-conversation avatars B and D are only required to be appropriately displayed so as not to be unnatural).

Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE to incorporate the teachings of KAWAKAMI to detect the speaker avatar among the plurality of participants and rearrange the placement of avatars in the virtual space based on the identified speaker in order to make the virtual conversation appear natural to all participants.
With respect to claim 31, LEE discloses a method for providing a conversation function using an avatar in an electronic device (par 0005; discloses disclosed techniques enable participants of a communication session that is rendered within a mixed reality environment to change their view or perspective in the mixed reality environment), the method comprising: displaying a first screen including avatars corresponding to a plurality of participants, wherein the first screen includes a first avatar corresponding to a user of the electronic device and other avatars corresponding to remaining participants, wherein the first avatar is displayed in a first area of the first screen, and the other avatars are displayed in a second area of the first screen, the second area being configured to provide a virtual space captured by a virtual camera and used for a conversation (fig. 3; discloses displaying an image including a plurality of avatars that correspond to a plurality of participants; the first avatar is displayed in a first area 312 while the other avatars are displayed in the second area; see par 0059-0060); receiving an input while the first screen including the first avatar and the other avatars is displayed; and displaying a second screen that switches a first view of the first screen to a second view in response to receiving the input, wherein the second view comprises either zoom-in view or a zoom-out view of the virtual space including the other avatars displayed in the second area, without changing a view of the first avatar displayed in the first area (fig. 7; discloses based on the user touch input, displays a second screen where the second screen is a zoom-in view of the virtual space including the other avatars displayed in the second area without changing a view of the first avatar displayed in the first area 312; see par 0074-0075).

LEE does not expressly disclose: in response to switching from the first avatar speaking to one or more second avatars speaking, identifying the one or more second avatars corresponding to a speaker among the other avatars; displaying a third screen including an image with the one or more second avatars as the speaker, wherein the image included in the third screen is configured such that the second avatar's movement from a side to a center is synchronized with the first avatar's gaze; wherein a third view of the third screen is determined based on a number of the one or more second avatars.

In the same field of endeavor, KAWAKAMI discloses a content display method and system (see abstract). KAWAKAMI discloses: in response to switching from the first avatar speaking to one or more second avatars speaking, identifying the one or more second avatars corresponding to a speaker among the other avatars (par 0097; discloses in response to the detection of the conversation start trigger, the avatar controller 333a changes an arrangement of at least a part of the avatars in the virtual space, and updates the avatar arrangement information (step S33)), and displaying a third screen including an image with the one or more second avatars as the speaker, wherein the image included in the third screen is configured such that the second avatar's movement from a side to a center is synchronized with the first avatar's gaze; wherein a third view of the third screen is determined based on a number of the one or more second avatars (par 0103; discloses, as illustrated in FIG. 9A, the conversation avatar C, which is a conversation partner of the conversation avatar A, is displayed near the center of the display 15. More specifically, even when the conversation avatar C is not moved from the state in FIG. 6 (the participant C is not performing the operation for moving the avatar C), the conversation avatar C is displayed near the center of the display 15 in response to the conversation start trigger; par 0107; discloses the arrangement positions of the non-conversation avatars B and D may be the same as or different from those in FIG. 6. Even in a different case, it is desirable to consider the arrangement position in FIG. 6, more specifically, the relative positional relationship between the non-conversation avatars B and D. For example, since the non-conversation avatar D is on the right side of the non-conversation avatar B in FIG. 6, the non-conversation avatar D is preferably arranged on the right side of the non-conversation avatar B in FIG. 9A in order to maintain such a positional relationship. In any case, since the importance is lower than that of the conversation avatar C, the non-conversation avatars B and D are only required to be appropriately displayed so as not to be unnatural).

Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE to incorporate the teachings of KAWAKAMI to detect the speaker avatar among the plurality of participants and rearrange the placement of avatars in the virtual space based on the identified speaker in order to make the virtual conversation appear natural to all participants.

With respect to claim 41, LEE as modified by KAWAKAMI further discloses wherein at least one processor, individually and/or collectively, is configured to: in response to input of a first swipe pattern, control the display to output the second screen, in which a zoom-in view of at least one avatar determined from the other avatars based on the first swipe pattern is displayed in a third area; and in response to input of a second swipe pattern, control the display to output the second screen, in which a zoom-out view of at least one avatar determined from the other avatars based on the second swipe pattern is displayed in the third area, wherein the first avatar displayed in a fourth area of the second screen is obtained based on an image of the user captured by the at least one camera, wherein the first swipe pattern or the second swipe pattern is input by manual operation of the user (LEE; fig. 7; par 0074; discloses FIG. 7 illustrates that the participant associated with the ongoing communication session 104 is using their computing device 106 to cause the video card 304 to appear in a zoomed in position within the graphical environment 310. Therefore, a point of view of the video card 304 has changed. In a similar fashion, the video card 304 may be made to zoom out within the graphical environment 310; par 0075; discloses to cause the video card 304 to appear in a zoomed in or a zoomed out position within graphical environment 310, the participant places at least two fingers 402 or other conductive surfaces on the screen 128 of the computing device 106. The participant may then slide the at least two fingers 402, or other conductive surfaces, in an outward motion on the screen 128 of the computing device 106 to cause the video card 304 to zoom in or enlarge (as illustrated). Similarly, the participant may slide the at least two fingers, or other conductive surfaces, in an inward motion on the screen 128 of the computing device 106 to cause the video card 304 to zoom out or shift to a smaller rendering. The participant may disengage the at least two fingers 402 or other conductive surfaces on the screen 128 of the computing device 106 to fix a desired zoomed in or zoomed out rendering of the video card 304 within the environment 310; see par 0033).

With respect to claim 42, LEE as modified by KAWAKAMI further discloses wherein displaying the second screen includes: in response to input of a first swipe pattern, outputting the second screen, in which a zoom-in view of at least one avatar determined from the other avatars based on the first swipe pattern is displayed in a third area; and in response to input of a second swipe pattern, outputting the second screen, in which a zoom-out view of at least one avatar determined from the other avatars based on the second swipe pattern is displayed in the third area, wherein the first avatar displayed in a fourth area of the second screen is obtained based on an image of the user captured by the at least one camera, wherein the first swipe pattern or the second swipe pattern is input by manual operation of the user (LEE; fig. 7; par 0074-0075; the same passages of LEE quoted for claim 41 above; see par 0033).

Claims 22-25, 27, 29, 32-35, 38, and 39 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (US Pub 2019/0294313) in view of KAWAKAMI et al. (US Pub 2022/0232191) and Funazukuri et al. (US Pub 2022/0277528).
With respect to claim 22, LEE as modified by KAWAKAMI further discloses wherein at least one processor, individually and/or collectively, is configured to: identify the one or more second avatars among the other avatars (par 0097; discloses in response to the detection of the conversation start trigger, the avatar controller 333a changes an arrangement of at least a part of the avatars in the virtual space, and updates the avatar arrangement information (step S33)); determine an angle of the third screen based on a position of the one or more second avatars in the first screen; and control the display to display the third screen including at least the first avatar and the one or more second avatars at the determined angle (par 0111; discloses, as illustrated in FIG. 9C, the conversation avatars A and C are displayed near the center of the display 23 from the viewpoint of the virtual camera. This means that, in step S33 of FIG. 8, the conversation avatars A and C for the viewer terminal 2 (for the virtual camera) have been changed so as to approach the position of the virtual camera).

LEE as modified by KAWAKAMI does not expressly disclose wherein, in the third screen, gaze of the first avatar is directed at the one or more second avatars, and positions of the other avatars are changed corresponding to the gaze of the first avatar.

In the same field of endeavor, Funazukuri discloses a virtual space sharing system capable of outputting a virtual image in which a plurality of users share a virtual space (see abstract). Funazukuri discloses wherein, in the third screen, gaze of the first avatar is directed at the one or more second avatars (see fig. 6; discloses first and second avatars; par 0133; discloses when a meeting is held with other users, the image generation unit 9 may generate a virtual image so that an avatar of another user who has a conversation with a user present in the space is placed in front of the user; par 0135; discloses when the first user U11 present in the first space S11 speaks to the avatar A12 of the second user U12 projected onto the screen 19 of the first space S11, the image generation unit 9 generates a virtual image in which the avatar A12 of the second user U12 is placed substantially in front of the first user U11 in the meeting space as shown in FIG. 6), and positions of the other avatars are changed corresponding to the gaze of the first avatar (par 0137; discloses when there are a plurality of other users participating in the meeting, avatars of the other users who are not having a conversation may be placed so as to avoid avatars of the other users who are having a conversation).

Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI to incorporate the teachings of Funazukuri to include a camera to detect participants' motion such that the motion of participants may be reflected through the avatars, and to display the conversation participants at the center and facing each other such that the displayed avatars reflect real participants having a conversation, hence facilitating good communication between users.
With respect to claim 23, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein at least one processor, individually and/or collectively, is configured to: control the display to display an utterance indicator indicating an utterance state of the one or more second avatar on the third screen. KAWAKAMI further discloses wherein at least one processor, individually and/or collectively, is configured to: control the display to display an utterance indicator indicating an utterance state of the one or more second avatar on the third screen (see fig. 9A; par 0108; discloses an utterance of the avatar may be displayed in a balloon). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of KAWAKAMI to display a user interface presenting the utterance of the speaker in order to allow other participants to easily recognize the speaker.

With respect to claim 24, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein at least one processor, individually and/or collectively, is configured to: control the display to display the one or more second avatar to represent at least one other participant in a gazing direction of the first avatar representing the user of the electronic device, around the first avatar in the third screen. KAWAKAMI further discloses wherein at least one processor, individually and/or collectively, is configured to: control the display to display the one or more second avatar to represent at least one other participant in a gazing direction of the first avatar representing the user of the electronic device, around the first avatar in the third screen (KAWAKAMI; fig. 9B; discloses a second avatar B is displayed corresponding to participant B around the avatar A in the second screen and in the gazing direction of the avatar A). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of KAWAKAMI to display the other participants around the speaker with gaze directed towards the speaker in order to make the virtual collaboration appear realistic by allowing the speaker to acknowledge that other participants are listening and paying attention to the speaker.

With respect to claim 25, LEE as modified by KAWAKAMI and Funazukuri further discloses wherein at least one processor, individually and/or collectively, is configured to: change at least one of angles, a field-of-view, or a focal position of the virtual camera to be considered to display the first avatar and/or the second avatar in the third screen, considering a number of the plurality of participants (KAWAKAMI; par 0076; discloses in the virtual space, a plurality of virtual cameras having different installation positions or different zooms may be set, and it may be possible to switch which virtual space is displayed as viewed from which virtual camera. Moreover, the entire virtual space may be displayed, or only a part thereof may be displayed).
With respect to claim 27, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose further wherein at least one processor, individually and/or collectively, is configured to: identify the speaker among the plurality of participants based on data collected from an external electronic device via the communication circuit. KAWAKAMI further discloses wherein at least one processor, individually and/or collectively, is configured to: identify the speaker among the plurality of participants based on data collected from an external electronic device via the communication circuit (KAWAKAMI; par 0060; discloses the controller 33 includes an avatar arrangement acquisitor 331, a trigger detector 332, and a display control data distributor 333; par 0062; discloses the trigger detector 332 detects that a trigger is generated by the trigger generator 162 of the participant terminals 1a to 1d. Here, an avatar (hereinafter referred to as a "conversation avatar") participating in a conversation is associated with the conversation start trigger; par 0047; discloses the avatar controller 161 generates data for causing the avatar to perform an activity in the virtual space according to an operation by the participant A on the operation input module 13. The activity here includes an operation such as a movement, a conversation, or the like. This data is transmitted from the communicator 11 to the content distribution server 3. Note that this data may be transmitted from the communicator 11 to the other participant terminals 1b to 1d or the viewer terminal 2; see par 0048 as well). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of KAWAKAMI to detect the speaker based on the data associated with the activity performed by the user, such as the start of a conversation, in order to accurately and effectively determine the speaker among the plurality of participants.

With respect to claim 29, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein at least one processor, individually and/or collectively, is configured to: determine an angle of the third screen in response to a swipe gesture; and control the display to display the third screen at the determined angle (LEE; see fig. 6; par 0072; discloses to cause the video card 304 to change position within the graphical environment 310, the participant places at least two fingers 402 or other conductive surfaces on the screen 128 of the computing device 106. The participant may then move (i.e., gesture action) the computing device 106 to the right (shown), left, up, down, or diagonally to similarly cause the video card 304 to shift in a desired direction. The participant may release or remove the at least two fingers 402 or other two conductive surfaces from the screen 128 to lock the video card 304 at the desired position).

With respect to claim 32, LEE as modified by KAWAKAMI further discloses wherein displaying the second screen includes: identifying the one or more second avatars among the other avatars (par 0097; discloses in response to the detection of the conversation start trigger, the avatar controller 333a changes an arrangement of at least a part of the avatars in the virtual space, and updates the avatar arrangement information (step S33)); determining an angle of the third screen based on a position of the one or more second avatars in the first screen; and displaying the third screen including at least the first avatar and the one or more second avatars at the determined angle (par 0111; discloses, as illustrated in FIG. 9C, the conversation avatars A and C are displayed near the center of the display 23 from the viewpoint of the virtual camera. This means that, in step S33 of FIG. 8, the conversation avatars A and C for the viewer terminal 2 (for the virtual camera) have been changed so as to approach the position of the virtual camera).

LEE as modified by KAWAKAMI does not expressly disclose wherein, in the third screen, gaze of the first avatar is directed at the one or more second avatars, and positions of the other avatars are changed corresponding to the gaze of the first avatar.

In the same field of endeavor, Funazukuri discloses a virtual space sharing system capable of outputting a virtual image in which a plurality of users share a virtual space (see abstract). Funazukuri discloses wherein, in the third screen, gaze of the first avatar is directed at the one or more second avatars (see fig. 6; discloses first and second avatars; par 0133; discloses when a meeting is held with other users, the image generation unit 9 may generate a virtual image so that an avatar of another user who has a conversation with a user present in the space is placed in front of the user; par 0135; discloses when the first user U11 present in the first space S11 speaks to the avatar A12 of the second user U12 projected onto the screen 19 of the first space S11, the image generation unit 9 generates a virtual image in which the avatar A12 of the second user U12 is placed substantially in front of the first user U11 in the meeting space as shown in FIG. 6), and positions of the other avatars are changed corresponding to the gaze of the first avatar (par 0137; discloses when there are a plurality of other users participating in the meeting, avatars of the other users who are not having a conversation may be placed so as to avoid avatars of the other users who are having a conversation). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI to incorporate the teachings of Funazukuri to include a camera to detect participants' motion such that the motion of participants may be reflected through the avatars, and to display the conversation participants at the center and facing each other such that the displayed avatars reflect real participants having a conversation, hence facilitating good communication between users.
With respect to claim 33, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose further comprising: displaying an utterance indicator indicating an utterance state of the one or more second avatar on the third screen. KAWAKAMI further discloses displaying an utterance indicator indicating an utterance state of the one or more second avatar on the third screen (see fig. 9A; par 0108; discloses an utterance of the avatar may be displayed in a balloon). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of KAWAKAMI to display a user interface presenting the utterance of the speaker in order to allow other participants to easily recognize the speaker.

With respect to claim 34, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein displaying the second screen includes displaying the one or more second avatar to represent at least one other participant in a gazing direction of the first avatar representing the user of the electronic device, around the first avatar in the third screen. KAWAKAMI further discloses wherein displaying the second screen includes displaying the one or more second avatar to represent at least one other participant in a gazing direction of the first avatar representing the user of the electronic device, around the first avatar in the third screen (KAWAKAMI; fig. 9B; discloses a second avatar B is displayed corresponding to participant B around the avatar A in the second screen and in the gazing direction of the avatar A). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of KAWAKAMI to display the other participants around the speaker with gaze directed towards the speaker in order to make the virtual collaboration appear realistic by allowing the speaker to acknowledge that other participants are listening and paying attention to the speaker.

With respect to claim 35, LEE as modified by KAWAKAMI and Funazukuri further discloses wherein displaying the second avatar includes changing at least one of angles, a field-of-view, or a focal position of the virtual camera to be considered to display the first avatar and/or the second avatar in the third screen, considering a number of the plurality of participants (KAWAKAMI; par 0076; discloses in the virtual space, a plurality of virtual cameras having different installation positions or different zooms may be set, and it may be possible to switch which virtual space is displayed as viewed from which virtual camera. Moreover, the entire virtual space may be displayed, or only a part thereof may be displayed).
With respect to claim 37, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein identifying the avatar corresponding to the speaker includes identifying the speaker among the plurality of participants based on data collected from an external electronic device via the communication circuit. KAWAKAMI further discloses wherein identifying the avatar corresponding to the speaker includes identifying the speaker among the plurality of participants based on data collected from an external electronic device via the communication circuit (KAWAKAMI; par 0060; discloses the controller 33 includes an avatar arrangement acquisitor 331, a trigger detector 332, and a display control data distributor 333; par 0062; discloses the trigger detector 332 detects that a trigger is generated by the trigger generator 162 of the participant terminals 1a to 1d. Here, an avatar (hereinafter referred to as a "conversation avatar") participating in a conversation is associated with the conversation start trigger; par 0047; discloses the avatar controller 161 generates data for causing the avatar to perform an activity in the virtual space according to an operation by the participant A on the operation input module 13. The activity here includes an operation such as a movement, a conversation, or the like. This data is transmitted from the communicator 11 to the content distribution server 3. Note that this data may be transmitted from the communicator 11 to the other participant terminals 1b to 1d or the viewer terminal 2; see par 0048 as well). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of KAWAKAMI to detect the speaker based on the data associated with the activity performed by the user, such as the start of a conversation, in order to accurately and effectively determine the speaker among the plurality of participants.

With respect to claim 39, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein displaying the second screen includes: determining an angle of the third screen in response to a swipe gesture; and controlling the display to display the third screen at the determined angle (LEE; see fig. 6; par 0072; discloses to cause the video card 304 to change position within the graphical environment 310, the participant places at least two fingers 402 or other conductive surfaces on the screen 128 of the computing device 106. The participant may then move (i.e., gesture action) the computing device 106 to the right (shown), left, up, down, or diagonally to similarly cause the video card 304 to shift in a desired direction. The participant may release or remove the at least two fingers 402 or other two conductive surfaces from the screen 128 to lock the video card 304 at the desired position).

Claims 28 and 38 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (US Pub 2019/0294313) in view of KAWAKAMI et al. (US Pub 2022/0232191), Funazukuri et al. (US Pub 2022/0277528), and Jang et al. (US Pub 2012/0316876).
With respect to claim 28, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein at least one processor, individually and/or collectively, is configured to: obtain a voice characteristic for each participant based on voice data included in data collected from an external electronic device via the communication circuit; and determine the speaker using information about a reference voice characteristic and the obtained voice characteristic.

In the same field of endeavor, Jang discloses a system and method for voice recognition wherein at least one processor, individually and/or collectively, is configured to: obtain a voice characteristic for each participant based on voice data included in data collected from an external electronic device via the communication circuit; and determine the speaker using information about a reference voice characteristic and the obtained voice characteristic (par 0118; discloses the memory 160 can store a reference voice pattern of each speaker. The controller 180 can extract a feature vector from a voice signal generated by a speaker; calculate a probability value between the extracted feature vector and at least one speaker model pre-stored in a database; and carry out speaker identification, determining whether the speaker is the one registered in the database based on the calculated probability value, or speaker verification, determining whether the speaker's access has been made in a proper way; par 0121; discloses the controller 180 can display a speaker indicator for identifying the first and the second speaker in addition to the first and the second avatar). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of Jang to compare the speaker's voice to a reference voice pattern stored in the memory in order to effectively and accurately identify the speaker among a plurality of speakers.

With respect to claim 38, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein identifying the avatar corresponding to the speaker includes: obtaining a voice characteristic for each participant based on voice data included in data collected from an external electronic device via the communication circuit; and determining the speaker using information about a reference voice characteristic and the obtained voice characteristic.

In the same field of endeavor, Jang discloses a system and method for voice recognition wherein identifying the avatar corresponding to the speaker includes: obtaining a voice characteristic for each participant based on voice data included in data collected from an external electronic device via the communication circuit; and determining the speaker using information about a reference voice characteristic and the obtained voice characteristic (par 0118; discloses the memory 160 can store a reference voice pattern of each speaker. The controller 180 can extract a feature vector from a voice signal generated by a speaker; calculate a probability value between the extracted feature vector and at least one speaker model pre-stored in a database; and carry out speaker identification, determining whether the speaker is the one registered in the database based on the calculated probability value, or speaker verification, determining whether the speaker's access has been made in a proper way; par 0121; discloses the controller 180 can display a speaker indicator for identifying the first and the second speaker in addition to the first and the second avatar). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to incorporate the teachings of Jang to compare the speaker's voice to a reference voice pattern stored in the memory in order to effectively and accurately identify the speaker among a plurality of speakers.

Claims 30 and 40 are rejected under 35 U.S.C. 103 as being unpatentable over LEE et al. (US Pub 2019/0294313) in view of KAWAKAMI et al. (US Pub 2022/0232191), Funazukuri et al. (US Pub 2022/0277528), PRASAD (US Pub 2023/0239168), and Kitada et al. (US Pub 2017/0132518).

With respect to claim 30, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose wherein at least one processor, individually and/or collectively, is configured to: based on detection of a touch on an avatar included in the third screen through a touch panel, control the communication circuit to transmit a signal to invoke an external electronic device corresponding to the touched avatar; extract images corresponding to a conversation between the plurality of participants conducted in the virtual space; and generate and store a conversation record image using the extracted images.

In the same field of endeavor, Prasad discloses a system and method for collaboration (see abstract). Prasad discloses wherein at least one processor, individually and/or collectively, is configured to: based on detection of a touch on an avatar included in the third screen through a touch panel, control the communication circuit to transmit a signal to invoke an external electronic device corresponding to the touched avatar (par 0122; discloses at block 1640, the participant's client device 340a receives a selection of a participant in the virtual expo. A participant may select another participant by using a mouse to move a cursor over an avatar corresponding to the other participant or by touching a location on a touchscreen corresponding to the other participant; par 0123; discloses at block 1650, after selecting the participant, the participant's client device 340a may receive information from the video conference provider 310 about the selected participant, such as from the participant's profile, including their name, job title, employer, and interests; see par 0029, 0031, 0044). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to obtain information about a specific participant using touch inputs, as disclosed by Prasad, in order to allow a participant to easily see relevant information about other participants during the collaboration.

LEE as modified by KAWAKAMI, Funazukuri, and Prasad does not expressly disclose extracting images corresponding to a conversation between the plurality of participants conducted in the virtual space, and generating and storing a conversation record image using the extracted images. In the same field of endeavor, Kitada discloses a system and method for intelligent electronic meetings (see abstract). Kitada discloses extracting images corresponding to a conversation between the plurality of participants conducted in the virtual space and generating and storing a conversation record image using the extracted images (par 0078; discloses meeting intelligence apparatus 102 may analyze stored meeting content data and generate a report based on analyzed meeting content data. Alternatively, meeting intelligence apparatus 102 may analyze meeting content data during electronic meeting 100 and may generate, after electronic meeting 100 ends, a report based on analyzed meeting content data. The report may be any of a number of documents, such as a meeting agenda, a meeting summary, a meeting transcript, a meeting participant analysis, a slideshow presentation, etc.). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI, Funazukuri, and Prasad to incorporate the teachings of Kitada to generate and store a report based on the contents of the meeting such that a record of the session is maintained for future reference.

With respect to claim 40, LEE as modified by KAWAKAMI and Funazukuri does not expressly disclose further comprising: based on detection of a touch on an avatar included in the third screen through a touch panel, controlling the communication circuit to transmit a signal to invoke an external electronic device corresponding to the touched avatar; extracting images corresponding to a conversation between the plurality of participants conducted in the virtual space; and generating and storing a conversation record image using the extracted images.

In the same field of endeavor, Prasad discloses a system and method for collaboration (see abstract). Prasad discloses, based on detection of a touch on an avatar included in the third screen through a touch panel, controlling the communication circuit to transmit a signal to invoke an external electronic device corresponding to the touched avatar (par 0122; discloses at block 1640, the participant's client device 340a receives a selection of a participant in the virtual expo. A participant may select another participant by using a mouse to move a cursor over an avatar corresponding to the other participant or by touching a location on a touchscreen corresponding to the other participant; par 0123; discloses at block 1650, after selecting the participant, the participant's client device 340a may receive information from the video conference provider 310 about the selected participant, such as from the participant's profile, including their name, job title, employer, and interests; see par 0029, 0031, 0044). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI and Funazukuri to obtain information about a specific participant using touch inputs, as disclosed by Prasad, in order to allow a participant to easily see relevant information about other participants during the collaboration.

LEE as modified by KAWAKAMI, Funazukuri, and Prasad does not expressly disclose extracting images corresponding to a conversation between the plurality of participants conducted in the virtual space, and generating and storing a conversation record image using the extracted images. In the same field of endeavor, Kitada discloses a system and method for intelligent electronic meetings (see abstract). Kitada discloses extracting images corresponding to a conversation between the plurality of participants conducted in the virtual space and generating and storing a conversation record image using the extracted images (par 0078; discloses meeting intelligence apparatus 102 may analyze stored meeting content data and generate a report based on analyzed meeting content data. Alternatively, meeting intelligence apparatus 102 may analyze meeting content data during electronic meeting 100 and may generate, after electronic meeting 100 ends, a report based on analyzed meeting content data. The report may be any of a number of documents, such as a meeting agenda, a meeting summary, a meeting transcript, a meeting participant analysis, a slideshow presentation, etc.). Therefore, it would have been obvious to one having ordinary skill in the art to modify the invention disclosed by LEE as modified by KAWAKAMI, Funazukuri, and Prasad to incorporate the teachings of Kitada to generate and store a report based on the contents of the meeting such that a record of the session is maintained for future reference.

Response to Arguments

Applicant's arguments with respect to claims 21 and 31 have been considered but are moot because the arguments do not apply to the new reference being used in the current rejection.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to SU
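For readers less familiar with the speaker-identification technique Jang is cited for (par 0118), the pattern is: extract a feature vector from the live voice signal, score it against per-participant reference voice patterns, and accept the best match only if it passes verification. The sketch below is illustrative only and is not part of the Office Action; the use of cosine similarity as the "probability value", the threshold, and all names are assumptions, since Jang's actual scoring model is not specified here.

```python
import math

# Illustrative sketch (not from the Office Action) of Jang-style speaker
# identification: score a voice feature vector against stored reference
# patterns and accept the best match only if it clears a threshold
# (speaker verification). Cosine similarity stands in for Jang's
# unspecified "probability value".

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def identify_speaker(voice_vec: list[float],
                     reference_patterns: dict[str, list[float]],
                     threshold: float = 0.8) -> str | None:
    """Return the registered participant whose reference pattern best
    matches the voice vector, or None if no match clears the threshold."""
    best_id, best_score = None, 0.0
    for participant_id, ref_vec in reference_patterns.items():
        score = cosine(voice_vec, ref_vec)
        if score > best_score:
            best_id, best_score = participant_id, score
    return best_id if best_score >= threshold else None

# Example with made-up reference patterns: participant "A" is the closest.
refs = {"A": [0.9, 0.1, 0.3], "B": [0.1, 0.8, 0.5]}
print(identify_speaker([0.85, 0.15, 0.3], refs))  # -> "A"
```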

Prosecution Timeline

May 31, 2023: Application Filed
Mar 05, 2024: Non-Final Rejection — §103, §112
May 30, 2024: Applicant Interview (Telephonic)
May 30, 2024: Examiner Interview Summary
Jun 06, 2024: Response Filed
Sep 12, 2024: Final Rejection — §103, §112
Dec 09, 2024: Request for Continued Examination
Dec 12, 2024: Response after Non-Final Action
Jan 29, 2025: Non-Final Rejection — §103, §112
Apr 29, 2025: Response Filed
Jul 10, 2025: Final Rejection — §103, §112
Sep 05, 2025: Applicant Interview (Telephonic)
Sep 05, 2025: Examiner Interview Summary
Oct 10, 2025: Request for Continued Examination
Oct 16, 2025: Response after Non-Final Action
Oct 27, 2025: Non-Final Rejection — §103, §112 (current)

Precedent Cases

Applications in similar technology granted by this same examiner

Patent 12603027: DISPLAY PANEL AND DRIVING METHOD THEREOF, AND DISPLAY APPARATUS
Granted Apr 14, 2026 (2y 5m to grant)

Patent 12596514: CONTROL METHOD AND CONTROL DEVICE
Granted Apr 07, 2026 (2y 5m to grant)

Patent 12592177: FOVEATED DISPLAY BURN-IN STATISTICS AND BURN-IN COMPENSATION SYSTEMS AND METHODS
Granted Mar 31, 2026 (2y 5m to grant)

Patent 12572234: DISPLAY DEVICE AND METHOD OF DRIVING THE SAME
Granted Mar 10, 2026 (2y 5m to grant)

Patent 12567367: Display Device and Pixel Sensing Method Thereof
Granted Mar 03, 2026 (2y 5m to grant)
These are the examiner's five most recent grants; study what changed in each case to get past this examiner.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 66%
With Interview: 77% (+11.4%)
Median Time to Grant: 2y 8m
PTA Risk: High
Based on 408 resolved cases by this examiner. Grant probability is derived from the career allow rate.
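The with-interview projection follows directly from the two figures above it. A one-line check, assuming (per the footnote) that the base rate is the career allow rate:

```python
# 269/408 career allow rate plus the page's reported +11.4% interview lift.
base = 269 / 408          # ~0.659, displayed as 66%
projected = base + 0.114  # ~0.773, displayed as 77%
print(f"{projected:.0%}")
```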
