DETAILED ACTION
Response to Arguments
Applicant’s arguments, see pages 7-8, filed 10/8/2025, with respect to the rejections of claims 1-20 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, a new ground of rejection is made in view of the newly cited Chen reference. See the rejections below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-9 and 11-19 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent Pub. 2019/0110023 A1 to Sakai et al. (“Sakai”) in view of US Patent Pub. 2023/0281885 A1 to Park et al. (“Park”), and further in view of US Patent No. 8,625,844 B2 to Chen (“Chen”).
As to claim 1, Sakai discloses an image generation method to be executed by a first terminal apparatus and a second terminal apparatus (See Figs. 2 and 5; Sakai discloses a video conversation such that user A is operating a first terminal and user B is operating a second terminal.), the image generation method comprising:
displaying, by the first terminal apparatus, a model image of a second user of the second terminal apparatus and determining whether a gaze of a first user of the first terminal apparatus is directed toward the displayed model image of the second user (See Fig. 5; ¶ 0072; Sakai discloses determining whether a gaze of user A is directed towards user B.).
Sakai fails to disclose generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, a model image of the first user based on a captured image of the first user received from the first terminal apparatus; and
generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is not directed toward the displayed model image of the second user, a model image of the first user based on data of the first user acquired in advance.
Park discloses generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, a model image of the first user based on a captured image of the first user received from the first terminal apparatus (See abstract, ¶ 0115-0116); and
generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is not directed toward the displayed model image of the second user, a model image of the first user based on data of the first user acquired in advance (¶ 0016).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Sakai with the teachings of Park of generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, a model image of the first user based on a captured image of the first user received from the first terminal apparatus; and generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is not directed toward the displayed model image of the second user, a model image of the first user based on data of the first user acquired in advance, as suggested by Park, thereby similarly using known configurations for generating an image of a user during a conversation based on the user’s gaze.
Sakai in view of Park fails to disclose the second terminal apparatus is configured to adjust a perspective of the model image of the first user based on a distance from a display of the second terminal apparatus to the second user when the model image of the first user is displayed.
Chen discloses a display method comprising adjusting a perspective of a displayed image based on the distance from the display apparatus to a user viewing the displayed image (See claim 1, “scaling the output frame based on a distance between the user and the display such that when the distance is farther, a content in the output frame is enlarged accordingly, wherein when the distance is shorter than a set distance, the content in the output frame is displayed in a reduced size or an original size”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Sakai in view of Park with the teachings of Chen of adjusting a perspective of a displayed image based on the distance from the display apparatus to a user viewing the displayed image, as suggested by Chen, thereby similarly using known configurations for adjusting a displayed image size according to a measured distance between a user and a display, enhancing viewing for the user.
As to claim 2, Park discloses further comprising displaying, by the second terminal apparatus, the generated model image of the first user (¶ 0016).
As to claim 3, Park discloses further comprising transmitting, by the first terminal apparatus, the captured image of the first user to the second terminal apparatus in a case in which it is determined that the gaze of the first user is directed toward the displayed model image of the second user (See abstract, ¶ 0115-0116).
As to claim 4, Park discloses wherein in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, the model image of the first user generated by the second terminal apparatus is an image in which a displayed face of the first user is facing the second user, who is facing a display of the second terminal apparatus (See abstract).
As to claim 5, Park discloses wherein the generating, by the second terminal apparatus, of the model image of the first user based on the data of the first user acquired in advance comprises generating the model image of the first user based on a captured image of the first user received in advance (¶ 0016, 0183).
As to claim 6, Sakai discloses further comprising transmitting, by the first terminal apparatus, coordinate data of a viewpoint of the first user to the second terminal apparatus in a case in which it is determined that the gaze of the first user is not directed toward the displayed model image of the second user (See Fig. 5; ¶ 0058).
As to claim 7, Park discloses further comprising generating, by the second terminal apparatus, an image including a head of the first user, as the model image of the first user, based on the coordinate data of the viewpoint of the first user received from the first terminal apparatus and the captured image of the first user received in advance (¶ 0123).
As to claim 8, Park discloses wherein the image of the head of the first user is an image in which a displayed face of the first user is facing a direction in which the first user is looking, as viewed from a second user facing a display of the second terminal apparatus (¶ 0123).
As to claim 9, Park discloses wherein the image of the head of the first user is an image of a back of the head or an image of a side of the face of the first user (¶ 0123).
As to claim 11, the same rationale applies as in the rejection of claim 1.
As to claim 12, the same rationale applies as in the rejection of claim 2.
As to claim 13, the same rationale applies as in the rejection of claim 3.
As to claim 14, the same rationale applies as in the rejection of claim 4.
As to claim 15, the same rationale applies as in the rejection of claim 5.
As to claim 16, the same rationale applies as in the rejection of claim 6.
As to claim 17, the same rationale applies as in the rejection of claim 7.
As to claim 18, the same rationale applies as in the rejection of claim 8.
As to claim 19, the same rationale applies as in the rejection of claim 9.
Claim 21 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Pub. 2019/0110023 A1 to Sakai et al. (“Sakai”) in view of US Patent Pub. 2023/0281885 A1 to Park et al. (“Park”), and further in view of US Patent Pub. 2024/0005622 A1 to Lee et al. (“Lee”).
As to claim 21, Sakai discloses an image generation method to be executed by a first terminal apparatus and a second terminal apparatus (See Figs. 2 and 5; Sakai discloses a video conversation such that user A is operating a first terminal and user B is operating a second terminal.), the image generation method comprising:
displaying, by the first terminal apparatus, a model image of a second user of the second terminal apparatus and determining whether a gaze of a first user of the first terminal apparatus is directed toward the displayed model image of the second user (See Fig. 5; ¶ 0072; Sakai discloses determining whether a gaze of user A is directed towards user B.).
Sakai fails to disclose generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, a model image of the first user based on a captured image of the first user received from the first terminal apparatus; and
generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is not directed toward the displayed model image of the second user, a model image of the first user based on data of the first user acquired in advance.
Park discloses generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, a model image of the first user based on a captured image of the first user received from the first terminal apparatus (See abstract, ¶ 0115-0116); and
generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is not directed toward the displayed model image of the second user (¶ 0016), a predetermined model image of the first user based on data of the first user acquired in advance (¶ 0123, “The imaging system can retrieve prior image data where the user’s head was turned more to the side, may use this prior image data to identify how the side(s) of the user’s head look, and may use this information about how the side(s) of the user’s head look to generate the modified image data in which the user is depicted with his/her head turned to the side”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Sakai with the teachings of Park of generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is directed toward the displayed model image of the second user, a model image of the first user based on a captured image of the first user received from the first terminal apparatus; and generating, by the second terminal apparatus, in a case in which it is determined by the first terminal apparatus that the gaze of the first user is not directed toward the displayed model image of the second user, a model image of the first user based on data of the first user acquired in advance, as suggested by Park, thereby similarly using known configurations for generating an image of a user during a conversation based on the user’s gaze.
Sakai in view of Park fails to disclose wherein the predetermined model image is free of elements representing eyes of the first user.
Lee discloses wherein the predetermined model image is free of elements representing eyes of the first user (¶ 0024; an image of the “back of the user’s head” is free of elements representing eyes).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Sakai in view of Park with the teachings of Lee wherein the predetermined model image is free of elements representing eyes of the first user, as suggested by Lee, thereby similarly using known configurations that use previously obtained user data to generate images of the user, such as side images or images of the back of the user’s head.
Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over US Patent Pub. 2019/0110023 A1 to Sakai et al. (“Sakai”) in view of US Patent Pub. 2023/0281885 A1 to Park et al. (“Park”), in view of US Patent Pub. 2024/0005622 A1 to Lee et al. (“Lee”), and further in view of US Patent No. 8,625,844 B2 to Chen (“Chen”).
As to claim 22, Sakai in view of Park and Lee fails to disclose further comprising adjusting, by the second terminal apparatus, a perspective of the model image of the first user based on a distance from a display of the second terminal apparatus to the second user when the model image of the first user is displayed.
Chen discloses a display method comprising adjusting a perspective of a displayed image based on the distance from the display apparatus to a user viewing the displayed image (See claim 1, “scaling the output frame based on a distance between the user and the display such that when the distance is farther, a content in the output frame is enlarged accordingly, wherein when the distance is shorter than a set distance, the content in the output frame is displayed in a reduced size or an original size”).
Before the effective filing date, it would have been obvious to one of ordinary skill in the art to have modified Sakai in view of Park and Lee with the teachings of Chen of adjusting a perspective of a displayed image based on the distance from the display apparatus to a user viewing the displayed image, as suggested by Chen, thereby similarly using known configurations for adjusting a displayed image size according to a measured distance between a user and a display, enhancing viewing for the user.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to NICHOLAS J LEE whose telephone number is (571) 270-7354. The examiner can normally be reached Monday-Friday, 10 AM-6 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Eason, can be reached at 571-270-7230. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/NICHOLAS J LEE/Primary Examiner, Art Unit 2624