DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments, see the remarks filed 09/22/2025, with respect to amended claims 1 and 14 have been fully considered but are moot in view of the new grounds of rejection, which rely on the teachings of Cutler (US 20190320142 A1).
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claim(s) 1-7, 10, 14-15, and 18 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 20190306457 A1) in view of Cutler (US 20190320142 A1) (hereafter “Cutler142”).
Regarding claims 1 and 14, Liu discloses a display device (figs. 1, 5, and 11) comprising:
a display panel having a display area where an image is displayed (11 of fig. 1, 105 of fig. 5, [0030] The local translucent display device 11 is used to display remote video information. The remote video information is captured by the remote camera array 14 and transmitted to the local translucent display device 11; [0042] action 1101, receiving remote video information from the remote video communication device 10′; [0043] action 1102, displaying the remote video information on the local translucent display device 11);
a plurality of cameras provided at positions overlapping with the display area in plan view to capture a user opposed to the display device as a subject (a local camera array 12 of fig. 1, 102 of fig. 5, obtain the local user’s eyes image position; [0029] Video information captured by one or more local cameras corresponding to a position of a remote user's face on the local translucent display device 11 may be transmitted to the remote video communication device 10′; [0032] The local camera array 12 may be used to capture local user's video information, which may be transmitted to the remote video communication device 10′; [0033] In one embodiment, the positions of the remote users' images corresponding to the local cameras refer to positions of the remote users' eyes' images, therefore local users and remote users can have a real experience of looking at each other. The local cameras corresponding to the remote users' images are equivalent to the eyes of the remote users. When the remote users' images move, the corresponding local cameras change accordingly. The local cameras in different positions capture different video information. Therefore, the video information seen by the moving remote users may vary); and
a controller selecting one of the plurality of cameras as a camera to capture the subject (101 of fig. 5, [0031] The video capture and processing module 101 may control the plurality of local cameras to work simultaneously, and only select video information captured by one or more local cameras corresponding to the remote user's face image position, and process the video information. Local cameras of the local camera array 12 may also work selectively, and the video capture and processing module 101 may select one or more local cameras corresponding to the remote user's face image position to work, and process the video information captured by the one or more local cameras), based on positions of eyes of a person included in the image displayed in the display area ([0033] When the local translucent display device 11 displays a screen of a plurality of remote users, the plurality of remote users in the screen correspond to a plurality of local cameras. Video information captured by the plurality of local cameras may be simultaneously transmitted to the remote video communication device 10′. When the positions of the remote users' images change, the positions of the plurality of local cameras corresponding to the remote users' images also change. The positions of the remote users' images corresponding to the local cameras refer to positions of the remote users' face images. In one embodiment, the positions of the remote users' images corresponding to the local cameras refer to positions of the remote users' eyes' images, therefore local users and remote users can have a real experience of looking at each other. The local cameras corresponding to the remote users' images are equivalent to the eyes of the remote users. When the remote users' images move, the corresponding local cameras change accordingly. The local cameras in different positions capture different video information. Therefore, the video information seen by the moving remote users may vary; [0034]).
It is noted that Liu is silent about a housing that houses the display panel and the plurality of cameras and has a bottom portion, the plurality of cameras being located between the bottom portion and the display panel.
Cutler142 teaches a housing (100 of fig. 3) that houses the display panel (200 of fig. 3) and the plurality of cameras (300 and 302 of fig. 3) and has a bottom portion (500 of fig. 3), the plurality of cameras (302 of fig. 3) being located between the bottom portion (500 of fig. 3) and the display panel (200 of fig. 3).
Taking the teachings of Liu and Cutler142 together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the housing of Cutler142 into the display device of Liu to provide a compact display device, enabling new and improved, more immersive video conferencing experiences.
Regarding claim 2, Liu and Cutler142 teach the display device of claim 1, Liu further teaches wherein the controller selects a camera located between right and left eyes of the person included in the image displayed in the display area, of the plurality of cameras, as the camera to capture the subject ([0034] the video capture and processing module 101 can also select video information captured by the camera which corresponds to the middle position of between the two eyes).
Regarding claim 3, Liu and Cutler142 teach the display device of claim 2, Liu further teaches wherein the controller selects a camera closest to a midpoint of a straight line connecting the right and left eyes of the person included in the image displayed in the display area, of the plurality of cameras, as the camera to capture the subject ([0034]).
Regarding claim 4, Liu and Cutler142 teach the display device of claim 1, Liu further teaches a communication unit communicating with an external device (10’ of fig. 1, 104 of fig. 5, [0047] When there is a plurality of remote users, the plurality of remote users correspond to the plurality of remote cameras, and the plurality of remote cameras may each capture video information. The plurality of video information is processed by the remote video communication device 10′ to form a remote video information. The local video communication device 10 may receive the remote video information through the communication module 104),
wherein the controller (101 of fig. 5): selects one of the plurality of cameras as the camera to capture the subject, based on positions of eyes of a user of the external device included in the image transmitted from the external device (101 of fig. 5, [0031 and 0032] The plurality of local cameras of the local camera array 12 may also work selectively, the video capture and processing module 101 may select one or more local cameras to work, and the position of the one or more local cameras may correspond to the position of the remote user's face image. The video information of the one or more local cameras may be synthesized into an integrated video to be transmitted to the remote video communication device 10′, [0033 and 0035]); and
transmits the image including the subject and captured by one of the plurality of cameras, to the external device ([0032 and 0033] The plurality of local cameras of the local camera array 12 may also work selectively, the video capture and processing module 101 may select one or more local cameras to work, and the position of the one or more local cameras may correspond to the position of the remote user's face image. The video information of the one or more local cameras may be synthesized into an integrated video to be transmitted to the remote video communication device 10′, [0045] action 1104, selecting local cameras corresponding to the position of remote users' image on the local translucent display device 11 from the local camera array 12, and transmitting local video information captured by the local cameras to the remote video communication device 10′).
Regarding claim 5, Liu and Cutler142 teach the display device of claim 1, Liu further teaches wherein the controller:
specifies a position of a face of the subject, using one of the plurality of cameras ([0033, 0034, and 0052]); and
selects a camera provided at a position intersecting with a virtual line extending perpendicularly from a position between right and left eyes of a face of the specified subject to the display panel, as a camera to capture the subject ([0034] When the remote user's face image becomes larger, the remote user's one eye position corresponds to one camera, and the remote user's the other eye position corresponds to another camera, the video capture and processing module 101 selects video information captured by two cameras corresponding to both eyes and forms an integrated video, or the video capture and processing module 101 can also select video information captured by the camera which corresponds to the middle position of between the two eyes. When the remote user's face image becomes larger, the remote user's one eye position corresponds to a plurality of cameras, the video capture and processing module 101 selects video information captured by one of the plurality of cameras which is closest to the local user's pupil of that eye, [0052]).
Regarding claim 6, Liu and Cutler142 teach the display device of claim 5, further comprising: a communication unit communicating with an external device, wherein the controller transmits an image including the subject and captured by the camera provided at the position intersecting with the virtual line, to the external device ([0031, 0032, 0047]).
Regarding claims 7 and 15, Liu and Cutler142 teach the display device of claim 1, Liu further teaches wherein the plurality of cameras are arranged at regular intervals (12 of fig. 1).
Regarding claims 10 and 18, Liu and Cutler142 teach the display device of claim 1, Liu further teaches wherein the display area includes a first area (112a of fig. 2) and a second area divided in a first direction (112b of fig. 2), and the plurality of cameras are opposed to the first area and are not opposed to the second area (12 and 112b of fig. 2).
Claim(s) 8-9 and 16-17 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 20190306457 A1) in view of Cutler (US 20190320142 A1) (hereafter “Cutler142”) and Cutler (US 20210021785 A1) (hereafter “Cutler785”).
Regarding claims 8 and 16, Liu and Cutler142 teach the display device of claim 1. However, Liu and Cutler142 do not teach wherein the plurality of cameras include a plurality of first cameras arranged at a first density and a plurality of second cameras arranged at a second density lower than the first density.
Cutler785 teaches wherein the plurality of cameras include a plurality of first cameras arranged at a first density and a plurality of second cameras arranged at a second density lower than the first density (610ca and 650 of fig. 6A; [0067] A seventh camera module 610ca is arranged similar to the first camera module 610aa but omits alternating imaging cameras 650 in a checkerboard pattern and as a result has a lower density of imaging cameras 650).
Taking the teachings of Liu, Cutler142, and Cutler785 together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the first cameras arranged at a first density and the second cameras arranged at a second, lower density, as taught by Cutler785, into the camera arrangement of Liu and Cutler142 to provide registration marks and/or positioning features that improve the accuracy and/or precision in positioning the camera module.
Regarding claims 9 and 17, Liu, Cutler142, and Cutler785 teach the display device of claim 8, Cutler785 further teaches wherein the plurality of first cameras are opposed to a central area of the display area (650 of fig. 6A), and the plurality of second cameras are opposed to a surrounding area which surrounds the central area of the display area (610ca of fig. 6A).
Claim(s) 11 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 20190306457 A1) in view of Cutler (US 20190320142 A1) (hereafter “Cutler142”) as applied to claim 1, and further in view of Lius et al. (US 20230292578 A1).
Regarding claim 11, Liu and Cutler142 teach the display device of claim 1. However, Liu and Cutler142 do not teach wherein the display panel is a display panel including self-luminous display elements.
Lius teaches wherein the display panel is a display panel including self-luminous display elements ([0044] The structure of the display panel 12 is further detailed in the following description, and the display panel 12 is a self-luminous display panel as an example, but not limited thereto).
Taking the teachings of Liu, Cutler142, and Lius together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the self-luminous display panel of Lius into the display 11 of Liu in view of Cutler142, thereby improving the brightness of each sub-pixel of the display device. Doing so would improve the quality of the image detected by the camera module or increase the signal-to-noise ratio of the camera module.
Claim(s) 12-13 and 19 is/are rejected under 35 U.S.C. 103 as being unpatentable over Liu (US 20190306457 A1) in view of Cutler (US 20190320142 A1) (hereafter “Cutler142”) as applied to claims 1 and 14, and further in view of Slobodin (US 20210360154 A1).
Regarding claims 12 and 19, Liu and Cutler142 teach the display device of claims 1 and 14. However, Liu and Cutler142 do not teach wherein the display panel includes a plurality of sub-pixels, and the sub-pixels and the plurality of cameras are alternately arranged in plan view.
Slobodin teaches wherein the display panel includes a plurality of sub-pixels (155 of fig. 2, [0077] each light-emitting die 150 includes three light-emitting regions 155 configured to output red, green, and blue light respectively (i.e., RGB sub-pixels)), and the sub-pixels and the plurality of cameras are alternately arranged in plan view (125 and 155 of fig. 2, [0078] Light-emitting regions 155 comprise display pixels of the display system of device 100, and photosensor regions 125 comprise input pixels of the image-capture system of device 100).
Taking the teachings of Liu, Cutler142, and Slobodin together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sub-pixel arrangement of Slobodin into the pixels of Liu and Cutler142 to improve the imaging resolution of the device by preventing field-curvature effects that might otherwise occur at edges of the device's field of view.
Regarding claim 13, Liu and Cutler142 teach the display device of claim 1. However, Liu and Cutler142 do not teach wherein the display panel includes a plurality of sub-pixels, the plurality of sub-pixels includes a first sub-pixel and a second sub-pixel adjacent to the first sub-pixel, and one of the plurality of cameras is located between the first sub-pixel and the second sub-pixel in plan view.
Slobodin teaches wherein the display panel includes a plurality of sub-pixels (155 of fig. 2, [0077] each light-emitting die 150 includes three light-emitting regions 155 configured to output red, green, and blue light respectively (i.e., RGB sub-pixels)), the plurality of sub-pixels includes a first sub-pixel and a second sub-pixel adjacent to the first sub-pixel (155 of fig. 2), and one of the plurality of cameras (125 of fig. 2) is located between the first sub-pixel and the second sub-pixel in plan view (155 of fig. 2).
Taking the teachings of Liu, Cutler142, and Slobodin together as a whole, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the sub-pixel arrangement of Slobodin into the pixels of Liu in view of Cutler142 to improve the imaging resolution of the device by preventing field-curvature effects that might otherwise occur at edges of the device's field of view.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to TUNG T VO whose telephone number is (571)272-7340. The examiner can normally be reached Monday-Friday 6:30 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Brian Pendleton can be reached at 571-272-7527. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
TUNG T. VO
Primary Examiner
Art Unit 2425
/TUNG T VO/Primary Examiner, Art Unit 2425