DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged that this application is a national stage entry of PCT/CN2023/106355. Priority to CN 202210911037.4, with a priority date of 29 July 2022, is acknowledged under 35 U.S.C. 119(a)-(d) and 365(b) and 37 CFR 1.55. Copies of the certified papers required by 37 CFR 1.55 have been received.
Information Disclosure Statement
The IDS dated 12 September 2024 has been considered and placed in the application file.
Specification - Title
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed.
The following title is suggested: Reducing Bandwidth of 3D Image Transmission Using High- and Low-Definition Areas.
Claim Interpretation
Under MPEP 2143.03, "All words in a claim must be considered in judging the patentability of that claim against the prior art." In re Wilson, 424 F.2d 1382, 1385, 165 USPQ 494, 496 (CCPA 1970). As a general matter, the grammar and the ordinary meaning, as understood by one having ordinary skill in the art, of the terms used in a claim will dictate whether, and to what extent, the language limits the claim scope. Language that suggests or makes a feature or step optional, but does not require that feature or step, does not limit the scope of a claim under the broadest reasonable claim interpretation. In addition, when a claim requires selection of an element from a list of alternatives, the prior art teaches the element if one of the alternatives is taught by the prior art. See, e.g., Fresenius USA, Inc. v. Baxter Int'l, Inc., 582 F.3d 1288, 1298, 92 USPQ2d 1163, 1171 (Fed. Cir. 2009).
Claims 4, 5 and 11 recite "or." Since "or" is disjunctive, any one of the elements found in the prior art is sufficient to reject the claim. While citations have been provided for completeness and rapid prosecution, only one element is required. On balance, the disjunctive interpretation appears to enjoy the most support in the specification, and for that reason the disjunctive interpretation (one of A, B, or C) is adopted for the purposes of this Office Action. Applicant's comments and/or amendments relating to this issue are invited to clarify the claim language and the prosecution history.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1-4, 6, 8-12 and 14-16 (all claims except 5, 7, 13 and 17-20) are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Application Publication No. 2023/0091348 A1 (Lee) in view of U.S. Patent Application Publication No. 2024/0221125 A1 (Bai et al.).
Claim 1
[Lee Fig. 9A, reproduced: a display showing high- and low-definition areas.]
Regarding Claim 1, Lee teaches a video communication method based on three-dimensional display ("a module for communicating with the outside, and the sensing module 4010 may be a module for sensing a motion of the user," paragraph [0300] and "receive a reconstructed first partial image in a plane form and project the reconstructed first partial image to a virtual sphere to output a reconstructed first partial image in a 3D form" paragraph [0134]), comprising:
acquiring, by a first device, information of a first view point of a first user at a first time and sending the information of the first view point to a second device ("The camera module 1780 may capture a still image and a video. According to an embodiment, the camera module 1780 may include one or more lenses, image sensors, image signal processors, or flashes," paragraph [0366] and "the auxiliary processor 1723 ( e.g., an image signal processor or a communication processor) may be implemented as a part of functionally related other components (e.g., the camera module 1780 or the communication module 1790)." paragraph [0355]);
after receiving the information of the first view point, taking, by the second device, first images of a second user through m cameras, determining a first high-definition area and a first low-definition area of each first image according to the information of the first view point, encoding first high-definition areas and first low-definition areas respectively to enable image resolution of the encoded first high-definition areas is higher than that of the encoded first low-definition areas, and sending data of encoded m first images to the first device; wherein, areas around the first view point are the first high-definition areas, and other areas than the first high-definition area are the first low-definition areas; and m is greater than or equal to 2 ("In operation S1270, the edge data network 2000 may determine one area-corresponding filter among a plurality of area-corresponding filters, based on the first focal position information and the second focal position information, and may generate a reduced second partial image by performing filtering on a second partial image corresponding to the second azimuth information," paragraph [00247] where second azimuth teaches m cameras, and corresponding filters teaches high and low definition areas);
decoding, by the first device, the data of the encoded m first images to obtain m second images, and acquiring information of a second view point of the first user at a second time, determining an offset of the second view point relative to the first view point, and determining second high-definition areas and second low-definition areas of the m second images according to the offset; wherein, areas around the second view point are the second high-definition areas, and other areas than the second high-definition areas are the second low-definition areas ("In this case, an area near each split area is a high-definition area, and areas are classified into a medium-definition area and a low-definition area in a direction away from each split area, so that the coefficients of the area-corresponding filters may be determined. In this case, most of the coefficients of high-definition areas may be coefficients greater than or equal to a first value, the coefficients of medium-definition areas may have a smaller number of coefficients greater than or equal to the first value than the coefficients of the high-definition areas, and the coefficients of low-definition areas may have a smaller number of coefficients greater than or equal to the first value than the coefficients of the medium-definition areas," paragraph [0190]); and
determining, by the first device, a target display position of the third three-dimensional model on a display screen according to the information of the second view point, and displaying the third three-dimensional model at the target display position ("The electronic device 1000 may obtain an Fov reconstructed frame 230 by decoding the encoded Fov frame information, and perform rendering on the Fov reconstructed frame, and then the electronic device 1000 may display a rendered Fov frame 240 on a display of the electronic device 1000," paragraph [0112]);
wherein the first device and the second device are three-dimensional display devices ("receive a reconstructed first partial image in a plane form and project the reconstructed first partial image to a virtual sphere to output a reconstructed first partial image in a 3D form" paragraph [0134]).
Lee does not explicitly teach all of the recited three-dimensional model limitations.
[Bai et al. Fig. 4b, reproduced: overlapping regions in the display of information.]
However, Bai et al. teach obtaining, by the first device, a first three-dimensional model by calculating and rendering the second high-definition areas of the m second images with a first neural network, obtaining a second three-dimensional model by calculating and rendering the second low-definition areas of the m second images with a second neural network, splicing the first three-dimensional model and the second three-dimensional model to obtain a third three-dimensional model; wherein complexity of the first neural network is higher than that of the second neural network ("use the preset algorithm to train the sample data to obtain the preset screen coordinate determination model. For example, the preset algorithm may be a neural network algorithm, such as a deep convolutional neural network algorithm," paragraph [0073]).
Therefore, taking the teachings of Lee and Bai et al. as a whole, it would have been obvious to a person having ordinary skill in the art before the effective filing date of the claimed invention to modify the "Transmitting Image Content using Edge Computing Device" of Lee to use the "Image Processing Method" of Bai et al. The suggestion/motivation for doing so would have been that, "However, the bandwidth transmission capability of electronic device and the rendering capability of the graphics processing unit (GPU)/graphics card are limited. If the image is rendered at full resolution, it often fails to realize the frame rate that matches the refresh rate of the display and also increases the processing pressure of the GPU. Therefore, reducing the processing pressure of the GPU as much as possible while satisfying the requirements of the users for high-definition content has become an urgent issue to be addressed," as noted by Bai et al. in paragraph [0004]. The combination is further motivated because it would predictably reduce transmission bandwidth, there being a reasonable expectation that three-dimensional images require substantial bandwidth, and/or because doing so merely combines prior art elements according to known methods to yield predictable results.
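For clarity of the record only, the following illustrative sketch shows one way the viewpoint-centered split into a high-definition area and a compressed low-definition area addressed above could be implemented. It is not taken from Lee, Bai et al., or the instant specification; the function names and parameter values (e.g., half_size, the 1/2 lateral compression) are hypothetical.

# Illustrative sketch only; not from Lee, Bai et al., or the instant specification.
# It keeps the area around the view point at full resolution and produces a
# laterally compressed copy of the frame as the low-definition data.
import numpy as np

def split_by_viewpoint(image, view_xy, half_size=128):
    """Return (high_definition_patch, low_definition_frame, patch_box)."""
    h, w = image.shape[:2]
    x, y = view_xy
    x0, x1 = max(0, x - half_size), min(w, x + half_size)
    y0, y1 = max(0, y - half_size), min(h, y + half_size)
    high_def = image[y0:y1, x0:x1].copy()   # pixels around the view point, unchanged
    low_def = image[:, ::2].copy()          # full frame, laterally compressed to 1/2
    return high_def, low_def, (x0, y0, x1, y1)

if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in for one captured image
    hi, lo, box = split_by_viewpoint(frame, (320, 240))
    print(hi.shape, lo.shape, box)   # (256, 256, 3) (480, 320, 3) (192, 112, 448, 368)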
The rejection of method claim 1 above applies mutatis mutandis to the corresponding limitations of apparatus claim 9, noting that the rejection above cites both device and method disclosures. Claim 9 is mapped below for clarity of the record and to address any limitations not included in claim 1.
Claim 2
Regarding claim 2, Lee teaches the method according to claim 1, as noted above.
Lee is not relied upon to explicitly teach all of the recited face image limitations.
However, Bai et al. teach wherein,
acquiring, by the first device, the information of the first view point of the first user at the first time comprises:
taking, by the first device, a face image of the first user at the first time through a first camera ("Alternatively, in order to capture face images of the user from a plurality of angles more accurately, each edge of the display of the electronic device may be provided with a camera 210. The one or more cameras 210 may be communicatively connected to the processor 120," paragraph [0050]), performing facial feature point detection on the face image, if detecting a face, performing eye recognition in a face area, and marking a left eye area and a right eye area, performing left pupil recognition in the left eye area, determining a relative position of the left pupil in the left eye area, performing right pupil recognition in the right eye area, determining a relative position of the right pupil in the right eye area, determining an intersection point position of binocular lines of sight of the first user on the display screen of the first device according to the relative position of the left pupil in the left eye area and the relative position of the right pupil in the right eye area, and taking the intersection point position as the first view point of the first user at the first time ("acquire the pupil image of the user through the IR camera. The RGB camera has a high refresh rate and a low resolution, and the IR camera has a high resolution. Therefore, the electronic device may perform face recognition on the image captured by the RGB camera, mark a human eye region, and determine coordinates of the human eye region," paragraph [0053]);
acquiring, by the first device, the information of the second view point of the first user at the second time comprises:
taking, by the first device, a face image of the first user at the second time through the first camera, performing facial feature point detection on the face image, if detecting a face, performing eye recognition in a face area, and marking a left eye area and a right eye area, performing left pupil recognition in the left eye area, determining a relative position of the left pupil in the left eye area, performing right pupil recognition in the right eye area, determining a relative position of the right pupil in the right eye area, determining an intersection point position of binocular lines of sight of the first user on the display screen of the first device according to the relative position of the left pupil in the left eye area and the relative position of the right pupil in the right eye area, and taking the intersection point position as the second view point of the first user at the second time ("In yet another example, the electronic device may be provided with a human eye tracking module. The human eye tracking module may track the movement of human eyes in real time. The human eye tracking module may determine the pupil coordinates of the user based on the human eye tracking algorithm," paragraph [0060] where tracking in real time teaches looking at a second viewpoint of a user at a second time).
Lee and Bai et al. are combined as per claim 1.
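For clarity of the record only, the following illustrative sketch (not taken from either reference; the eye positions and gaze directions are hypothetical stand-ins for the recited left and right pupil positions) shows one way an intersection point of binocular lines of sight with a display plane could be computed.

# Illustrative sketch only; not from Lee or Bai et al. The display screen is
# modeled as the plane z = 0, each line of sight is a ray from an eye position
# along a gaze direction, and the view point is taken as the midpoint of the
# two ray/plane intersections.
import numpy as np

def gaze_point_on_screen(eye_pos, gaze_dir):
    """Intersect one line of sight with the display plane z = 0."""
    eye_pos = np.asarray(eye_pos, dtype=float)
    gaze_dir = np.asarray(gaze_dir, dtype=float)
    t = -eye_pos[2] / gaze_dir[2]            # ray parameter where z reaches 0
    return eye_pos[:2] + t * gaze_dir[:2]

def binocular_view_point(left_eye, left_dir, right_eye, right_dir):
    """Approximate the view point as the midpoint of both on-screen hits."""
    return (gaze_point_on_screen(left_eye, left_dir) +
            gaze_point_on_screen(right_eye, right_dir)) / 2.0

if __name__ == "__main__":
    # eyes 60 cm in front of the screen, converging slightly toward its centre
    print(binocular_view_point([-3, 0, 60], [0.05, 0, -1],
                               [3, 0, 60], [-0.05, 0, -1]))   # [0. 0.]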
Claim 3
Regarding claim 3, Lee teaches the method according to claim 1, wherein, video communication is conducted between the first device and the second device over a remote network ("electronic device 1000 may refer to a terminal, a user equipment (UE), a mobile station, a subscriber station, a remote terminal, a wireless terminal, or a user device," paragraph [0094]).
Claim 4
Regarding claim 4, Lee teaches the method according to claim 1, wherein, encoding the first high-definition areas and the first low-definition areas respectively to enable that the image resolution of the encoded first high-definition areas is higher than that of the encoded first low-definition areas comprises:
keeping a number of pixels in the first high definition areas unchanged ("The fixation region of the stitched image is high-definition content, which satisfies the requirements of the user. In addition, the non-fixation region of the stitched image is low-definition content, which reduces the rendering amount of the GPU," paragraph [0127]);
compressing laterally a number of pixels in the first low definition areas to 1/N of the original number of pixels in the first low definition areas, or compressing vertically a number of pixels in the first low definition areas to 1/N of the original number of pixels in the first low definition areas; wherein N is greater than or equal to 2 ("The electronic device compresses the image of the difference region according to a compression ratio of ¼, and the compressed image of the difference region corresponds to 27 sub-regions in b of FIG. 11. That is, image information of a single sub-region of the second region in b of FIG. 11 may include image data of 4 sub-regions of the difference region in a of FIG. 11 and tags of the 4 sub-regions," paragraph [0134], where the compression ratios teach vertical and horizontal compression).
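For clarity of the record only, the following illustrative sketch (not taken from either reference; decimation is used purely as a hypothetical compression scheme) shows a lateral or vertical reduction of a low-definition area to 1/N of its pixels, with N greater than or equal to 2 as recited in claim 4.

# Illustrative sketch only; simple decimation stands in for whatever
# compression the references or the claims actually employ.
import numpy as np

def compress_low_def(area, n, axis="lateral"):
    if n < 2:
        raise ValueError("claim 4 requires N >= 2")
    if axis == "lateral":
        return area[:, ::n].copy()   # keep every Nth column
    if axis == "vertical":
        return area[::n, :].copy()   # keep every Nth row
    raise ValueError("axis must be 'lateral' or 'vertical'")

if __name__ == "__main__":
    low = np.arange(64, dtype=np.uint8).reshape(8, 8)
    print(compress_low_def(low, 2, "lateral").shape)   # (8, 4)
    print(compress_low_def(low, 4, "vertical").shape)  # (2, 8)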
Claim 6
Regarding claim 6, Lee teaches the method according to claim 4, wherein, decoding, by the first device, the data of the encoded m first images to obtain the m second images comprises: for data of any one of the encoded first images, decoding a first high-definition area and a first low-definition area of the first image and decompressing the low-definition area of the first image, to obtain a second image ("The position information of the image may include tags of a plurality of sub-regions of the display region corresponding to the image. The scaling ratio of the image may refer to a ratio between a compressed size and an actual size of the image. In this way, the electronic device may reduce the data transmission amount of the transmission bandwidth by compressing the image," paragraph [0129] where compressing teaches decompressing).
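Likewise for clarity of the record only, the following illustrative sketch (hypothetical pixel-repetition upsampling, not drawn from either reference) shows how a low-definition area compressed as above could be decompressed back to its original pixel count when decoding.

# Illustrative sketch only; pixel repetition is one hypothetical way to
# restore the pixel count of a laterally or vertically compressed area.
import numpy as np

def decompress_low_def(area, n, axis="lateral"):
    if axis == "lateral":
        return np.repeat(area, n, axis=1)   # repeat each column N times
    if axis == "vertical":
        return np.repeat(area, n, axis=0)   # repeat each row N times
    raise ValueError("axis must be 'lateral' or 'vertical'")

if __name__ == "__main__":
    low = np.arange(32, dtype=np.uint8).reshape(8, 4)   # laterally compressed with N = 2
    print(decompress_low_def(low, 2, "lateral").shape)  # (8, 8)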
Claim 8
Regarding claim 8, Lee teaches the method according to claim 1, as noted above.
Lee is not relied upon to explicitly teach all of the recited three-dimensional model limitations.
However, Bai et al. teach wherein,
determining, by the first device, the target display position of the third three-dimensional model on the display screen according to the information of the second view point, and displaying the third three-dimensional model at the target display position, comprises:
according to the information of the second view point, using both left and right virtual cameras to take images of the third three-dimensional model to obtain a left-eye image and a right-eye image, combining the left-eye image and the right-eye image to generate a target picture of the third three-dimensional model, wherein the left-eye image is on a left side of the second view point in the target picture, the right-eye image is on a right side of the second view point in the target picture ("For example, the electronic device may have a virtual camera. For example, the virtual camera may be a software program of the electronic device. The virtual camera may be a set of data parameters of the electronic device. These data parameters may be used to identify the position, orientation, viewing angle, etc. of the rendered image," paragraph [0104]); and
displaying the target picture on the display screen of the first device ("The electronic device may determine the image corresponding to the fixation region according to the viewport of the virtual camera, and render the image to obtain the rendered image of the fixation region," paragraph [0104]).
Lee and Bai et al. are combined as per claim 1.
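For clarity of the record only, the following illustrative sketch (not taken from either reference; the left-eye and right-eye images are hypothetical stand-ins for the virtual-camera captures of the third three-dimensional model) shows one way a left-eye image and a right-eye image could be combined into a target picture around the second view point, with the left-eye image on the left side of the view point and the right-eye image on the right side.

# Illustrative sketch only. For simplicity it assumes both eye images fit on
# the screen at the chosen view point; boundary handling is omitted.
import numpy as np

def combine_stereo(left_img, right_img, screen_hw, view_xy):
    """Place the left-eye image left of the view point and the right-eye image right of it."""
    h, w = screen_hw
    target = np.zeros((h, w, 3), dtype=np.uint8)
    x, y = view_xy
    ih, iw = left_img.shape[:2]
    y0 = y - ih // 2
    target[y0:y0 + ih, x - iw:x] = left_img    # left-eye image ends at the view point
    target[y0:y0 + ih, x:x + iw] = right_img   # right-eye image starts at the view point
    return target

if __name__ == "__main__":
    left = np.full((100, 100, 3), 80, dtype=np.uint8)
    right = np.full((100, 100, 3), 160, dtype=np.uint8)
    print(combine_stereo(left, right, (480, 640), (320, 240)).shape)   # (480, 640, 3)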
Claim 9
Regarding claim 9, Lee teaches a video communication system based on three-dimensional display ("a module for communicating with the outside, and the sensing module 4010 may be a module for sensing a motion of the user," paragraph [0300] and "receive a reconstructed first partial image in a plane form and project the reconstructed first partial image to a virtual sphere to output a reconstructed first partial image in a 3D form" paragraph [0134]), comprising:
a first device, configured to acquire information of a first view point of a first user at a first time and send the information of the first view point to a second device ("The camera module 1780 may capture a still image and a video. According to an embodiment, the camera module 1780 may include one or more lenses, image sensors, image signal processors, or flashes," paragraph [0366] and "the auxiliary processor 1723 ( e.g., an image signal processor or a communication processor) may be implemented as a part of functionally related other components (e.g., the camera module 1780 or the communication module 1790)." paragraph [0355]), receive data of encoded m first images sent by the second device, decode the data of the encoded m first images to obtain m second images, and acquire information of a second view point of the first user at a second time, determine an offset of the second view point relative to the first view point, determine second high-definition areas and second low-definition areas of the m second images according to the offset; wherein, areas around the second view point are the second high-definition areas, and other areas than the second high-definition areas are the second low-definition areas ("In this case, an area near each split area is a high-definition area, and areas are classified into a medium-definition area and a low-definition area in a direction away from each split area, so that the coefficients of the area-corresponding filters may be determined. In this case, most of the coefficients of high-definition areas may be coefficients greater than or equal to a first value, the coefficients of medium-definition areas may have a smaller number of coefficients greater than or equal to the first value than the coefficients of the high-definition areas, and the coefficients of low-definition areas may have a smaller number of coefficients greater than or equal to the first value than the coefficients of the medium-definition areas," paragraph [0190]);
determine a target display position of the third three-dimensional model on a display screen according to the information of the second view point, and display the third three-dimensional model at the target display position ("The electronic device 1000 may obtain an Fov reconstructed frame 230 by decoding the encoded Fov frame information, and perform rendering on the Fov reconstructed frame, and then the electronic device 1000 may display a rendered Fov frame 240 on a display of the electronic device 1000," paragraph [0112]);
the second device, configured to take first images of a second user through m cameras after receiving the information of the first view point; determine a first high-definition area and a first low-definition area of each first image according to the information of the first view point ("In operation S1270, the edge data network 2000 may determine one area-corresponding filter among a plurality of area-corresponding filters, based on the first focal position information and the second focal position information, and may generate a reduced second partial image by performing filtering on a second partial image corresponding to the second azimuth information," paragraph [00247] where second azimuth teaches m cameras, and corresponding filters teaches high and low definition areas);
encode the first high-definition areas and the first low-definition areas respectively to enable image resolution of the encoded first high-definition areas is higher than that of the first low-definition areas; send the data of the encoded m first images to the first device ("In this case, an area near each split area is a high-definition area, and areas are classified into a medium-definition area and a low-definition area in a direction away from each split area, so that the coefficients of the area-corresponding filters may be determined. In this case, most of the coefficients of high-definition areas may be coefficients greater than or equal to a first value, the coefficients of medium-definition areas may have a smaller number of coefficients greater than or equal to the first value than the coefficients of the high-definition areas, and the coefficients of low-definition areas may have a smaller number of coefficients greater than or equal to the first value than the coefficients of the medium-definition areas," paragraph [0190]);
wherein, areas around the first view point are the first high-definition areas, and other areas than the first high-definition areas are the first low-definition areas; and m is greater than or equal to 2; wherein the first device and the second device are three-dimensional display devices ("receive a reconstructed first partial image in a plane form and project the reconstructed first partial image to a virtual sphere to output a reconstructed first partial image in a 3D form" paragraph [0134]).
Lee is not relied upon to explicitly teach all of the recited three-dimensional model limitations.
However, Bai et al. teach obtain a first three-dimensional model by calculating and rendering the second high-definition areas of the m second images with a first neural network, obtain a second three-dimensional model by calculating and rendering the second low-definition areas of the m second images with a second neural network, obtain a third three-dimensional model by splicing the first three-dimensional model and the second three-dimensional model, wherein complexity of the first neural network is higher than that of the second neural network ("use the preset algorithm to train the sample data to obtain the preset screen coordinate determination model. For example, the preset algorithm may be a neural network algorithm, such as a deep convolutional neural network algorithm," paragraph [0073]).
Lee and Bai et al. are combined as per claim 1.
Claim 10
Regarding claim 10, Lee teaches the system according to claim 9, as noted above.
Lee is not relied upon to explicitly teach all of the recited face image limitations.
However, Bai et al. teach wherein,
the first device is configured to acquire the information of the first view point of the first user at the first time by the following:
taking a face image of the first user at the first time through a first camera ("Alternatively, in order to capture face images of the user from a plurality of angles more accurately, each edge of the display of the electronic device may be provided with a camera 210. The one or more cameras 210 may be communicatively connected to the processor 120," paragraph [0050]), performing facial feature point detection on the face image, if a face is detected, performing eye recognition in a face area, and marking a left eye area and a right eye area, performing left pupil recognition in the left eye area, determining a relative position of the left pupil in the left eye area, performing right pupil recognition in the right eye area, determining a relative position of the right pupil in the right eye area, determining an intersection point position of binocular lines of sight of the first user on the display screen of the first device according to the relative position of the left pupil in the left eye area and the relative position of the right pupil in the right eye area, and taking the intersection point position as the first view point of the first user at the first time ("acquire the pupil image of the user through the IR camera. The RGB camera has a high refresh rate and a low resolution, and the IR camera has a high resolution. Therefore, the electronic device may perform face recognition on the image captured by the RGB camera, mark a human eye region, and determine coordinates of the human eye region," paragraph [0053]);
the first device is configured to acquire the information of the second view point of the first user at the second time by the following:
taking a face image of the first user at the second time through the first camera, performing facial feature point detection on the face image, if a face is detected, performing eye recognition in the face area, and marking a left eye area and a right eye area, performing left pupil recognition in the left eye area, determining a relative position of the left pupil in the left eye area, performing right pupil recognition in the right eye area, determining a relative position of the right pupil in the right eye area, determining an intersection point position of binocular lines of sight of the first user on the display screen of the first device according to the relative position of the left pupil in the left eye area and the relative position of the right pupil in the right eye area, and taking the intersection point position as the second view point of the first user at the second time ("In yet another example, the electronic device may be provided with a human eye tracking module. The human eye tracking module may track the movement of human eyes in real time. The human eye tracking module may determine the pupil coordinates of the user based on the human eye tracking algorithm," paragraph [0060] where tracking in real time teaches looking at a second viewpoint of a user at a second time).
Lee and Bai et al. are combined as per claim 1.
Claim 11
Regarding claim 11, Lee teaches the system according to claim 9, wherein,
the first device is configured to encode the first high-definition areas and the first low-definition areas respectively to enable the image resolution of the encoded first high-definition areas is higher than that of the encoded first low-definition areas by the following:
keeping a number of pixels in the first high definition areas unchanged ("The fixation region of the stitched image is high-definition content, which satisfies the requirements of the user. In addition, the non-fixation region of the stitched image is low-definition content, which reduces the rendering amount of the GPU," paragraph [0127]);
compressing laterally a number of pixels in the first low definition areas to 1/N of the original number of pixels in the first low definition areas, or compressing vertically a number of pixels in the first low definition areas to 1/N of the original number of pixels vertically; wherein N is greater than or equal to 2 ("The electronic device compresses the image of the difference region according to a compression ratio of ¼, and the compressed image of the difference region corresponds to 27 sub-regions in b of FIG. 11. That is, image information of a single sub-region of the second region in b of FIG. 11 may include image data of 4 sub-regions of the difference region in a of FIG. 11 and tags of the 4 sub-regions," paragraph [0134], where the compression ratios teach vertical and horizontal compression).
Claim 12
Regarding claim 12, Lee teaches the system according to claim 11, wherein,
the first device is configured to decode the data of the encoded m first images to obtain the m second images by the following:
for data of any one of the encoded first images, decoding a first high-definition area and a first low-definition area of the first image and decompressing the low-definition area of the first image, to obtain a second image ("The position information of the image may include tags of a plurality of sub-regions of the display region corresponding to the image. The scaling ratio of the image may refer to a ratio between a compressed size and an actual size of the image. In this way, the electronic device may reduce the data transmission amount of the transmission bandwidth by compressing the image," paragraph [0129] where compressing teaches decompressing).
Claim 14
Regarding claim 14, Lee teaches the system according to claim 9, as noted above.
Lee does not explicitly teach all of the recited three-dimensional model limitations.
However, Bai et al. teach wherein,
the first device is configured to determine the target display position of the third three-dimensional model on the display screen according to the information of the second view point, and display the third three-dimensional model at the target display position, by the following:
according to the information of the second view point, using both left and right virtual cameras to take images of the third three-dimensional model to obtain a left-eye image and a right-eye image, combining the left-eye image and the right-eye image to generate a target picture of the third three-dimensional model, wherein the left-eye image is on a left side of the second view point in the target picture, the right-eye image is on a right side of the second view point in the target picture ("For example, the electronic device may have a virtual camera. For example, the virtual camera may be a software program of the electronic device. The virtual camera may be a set of data parameters of the electronic device. These data parameters may be used to identify the position, orientation, viewing angle, etc. of the rendered image," paragraph [0104]); and
displaying the target picture on the display screen of the first device ("The electronic device may determine the image corresponding to the fixation region according to the viewport of the virtual camera, and render the image to obtain the rendered image of the fixation region," paragraph [0104]).
Lee and Bai et al. are combined as per claim 1.
Claim 15
Regarding claim 15, Lee teaches the system according to claim 9, as noted above.
Lee is not relied upon to explicitly teach all of the recited camera location limitations.
However, Bai et al. teach wherein, the first camera is disposed in the middle of a top border of the display screen of the first device; and
the m cameras are respectively disposed in left and right half areas of a top border, left and right half areas of a bottom border, a middle area of the left border and a middle area of the right border of a display screen of the second device ("Certainly, the one or more cameras 210 may also be arranged in other places of the electronic device, for example, may be arranged on the top or side of the electronic device, which will not be limited. Alternatively, in order to capture face images of the user from a plurality of angles more accurately, each edge of the display of the electronic device may be provided with a camera 210," paragraph [0050]).
Lee and Bai et al. are combined as per claim 1.
Claim 16
Regarding claim 16, Lee teaches the method according to claim 1, as noted above.
Lee is not relied upon to explicitly teach all of the recited camera location limitations.
However, Bai et al. teach wherein,
the first camera is disposed in the middle of a top border of the display screen of the first device; and
the m cameras are respectively disposed in left and right half areas of a top border, left and right half areas of a bottom border, a middle area of the left border and a middle area of the right border of a display screen of the second device ("Certainly, the one or more cameras 210 may also be arranged in other places of the electronic device, for example, may be arranged on the top or side of the electronic device, which will not be limited. Alternatively, in order to capture face images of the user from a plurality of angles more accurately, each edge of the display of the electronic device may be provided with a camera 210," paragraph [0050]).
Lee and Bai et al. are combined as per claim 1.
Allowable Subject Matter
Claims 5, 7, 13 and 17-20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
References Cited
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure.
U.S. Patent Application Publication No. 2021/0174724 A1 to Li et al. discloses that, upon receiving an image to be displayed sent by a graphics processor, a data drive chip controls respective rows of sub-pixels containing a high-definition display area in a connected display panel to be scanned line by line according to the position of the high-definition display area in the image to be displayed, while at the same time controlling respective rows of sub-pixels containing only a low-definition display area in the display panel to be scanned N rows at a time according to the position of the low-definition display area in the image to be displayed, where N is an even number greater than 1.
JP Patent Publication No. H09-93613 A to Shimizu discloses applying a rough image with reduced definition to the peripheral area of the human visual field, where visual resolution is naturally lower, so that a wider field of view and a higher sense of presence can be obtained within a limited amount of image information, realizing a panoramic stereoscopic image system. Shimizu further discloses dividing the left and right stereoscopic images into upper and lower parts for display on a normal television screen and optically superimposing them, so that a wide-field panoramic stereoscopic image is obtained with a smaller amount of information.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HEATH E WELLS whose telephone number is (703)756-4696. The examiner can normally be reached Monday-Friday 8:00-4:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ms. Jennifer Mehmood can be reached on 571-272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/Heath E. Wells/Examiner, Art Unit 2664
Date: 20 February 2026