Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
2. This action is responsive to the communications filed on 2/3/2026. Claims 1-8, 10-15, 17, 20 are pending in the case. Claims 1, 2, 10 are amended. Claims 1, 10 are independent claims. Claims 1-8, 10-15, 17, 20 are rejected.
Summary of claims
3. Claims 1-8, 10-15, 17, 20 are pending;
Claims 1, 2, 10 are amended;
Claims 1, 10 are independent claims;
Claims 9, 16, 18, 19 are cancelled;
Claims 1-8, 10-15, 17, 20 are rejected.
Response to Arguments
4. Applicant’s arguments, see Remarks, filed on 2/3/2026, with respect to the rejection(s) of claim(s) 1-8, 10-15, 17, 20 under 35 U.S.C. 103 have been fully considered but are not persuasive in view of the new ground(s) of rejection.
Applicant argued on pages 7-9 that the cited references, including Chung, Rakshit, Cheong, and Kwon, do not teach the amended limitations of claim 1, such as “wherein the location includes a distance between the first housing and the user, the angle between the first housing and the second housing, an angle between the user and the first housing…an angle between the user and the second housing.” Examiner respectfully disagrees and submits that Chung discloses a distance sensor (Chung: [0042]) and an angle sensor that may detect an angle between a first housing of the electronic device and a second housing of the electronic device (Chung: Abstract), including detecting a folding angle (Chung: [0010]). Chung further discloses detecting a user’s field of view or angle of view heading toward the electronic device (Chung: [0039]). More specifically, AN is cited as clearly disclosing determining a viewing angle and distance information regarding a distance from the flexible display device to the user (AN: [0035], [0063], [0079]).
Applicant argued that Rakshit does not teach “displaying the correction content on the first housing and the second portion of the content on the second housing unmodified” and “displaying the correction content on the second housing and the first portion of the content on the first housing unmodified”. Examiner respectfully disagrees and submits that Rakshit teaches in Figs. 2A-2B and [0022]-[0028] that when flexion is applied to a flexible display, extended content presented in an extended content region of the flexible display is scaled to present a depth dimension. For example, when the display is bent, scaling is applied to the portion of the image being displayed on second content region 202 based on a user’s line of sight. From the user’s perspective, the features in the top portion of the beach scene, i.e., the top of the tree, the sun, and the clouds, will be scaled more aggressively the farther away they are from the bend line, and the scaling will introduce a depth dimension to the top portion of the image on region 202. That is, the content displayed on the top portion of the device is modified while the content displayed on the lower portion of the device is unmodified; similarly, when the lower portion of the device is the extended content region, the content displayed on the lower portion of the device is modified and the content displayed on the top portion of the device is unmodified.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries set forth in Graham v. John Deere Co., 383 U.S. 1, 148 USPQ 459 (1966), that are applied for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
5. Claims 1-4, 10-13, 20 are rejected under 35 U.S.C. 103 as being unpatentable over Jinkyo Chung et al. (US Publication 20200125144 A1, hereinafter Chung), in view of Sarbajit Rakshit (US Publication 20170017313 A1, hereinafter Rakshit), in view of Jin-wang An (US Publication 20180137801 A1, hereinafter AN), and in view of Yu-Sun Cheong et al. (US Publication 20180039408 A1, hereinafter Cheong).
As for independent claim 1, Chung discloses: A method for displaying content on an electronic device comprising a first housing and a second housing rotatably connected to the first housing, wherein a first portion of the content corresponds to the first housing and a second portion of the content corresponds to the second housing (Chung: Abstract, a foldable electronic device for controlling a user interface and its operating method; [0009], The electronic device includes a hinge, a first housing including a first surface which faces a first direction and a third surface which faces a second direction facing away the first direction, a second housing including a second surface which faces the first direction and a fourth surface which faces the second direction facing away the first direction, and folding with the first housing based on the hinge, a flexible display extending from the first surface to the second surface), the method comprising: obtaining an angle between the first housing and a second housing (Chung: Abstract, a foldable electronic device for controlling a user interface…detecting an angle between first housing of the electronic device and a second housing of the electronic device using at least one sensor circuit…the second user interface comprising the first object that is displaying in a second area…and the second area may be different from the first area; [0086], the event converter may calculate a folding angle between the first surface and the second surface of the electronic device; Fig. 
9, step 903, 911, determine second area); obtaining a location of a user…based on user data obtained by a camera (Chung: [0046], a camera module; [0059], the camera module may capture a still image or moving images, the camera module may include one or more lenses, image sensors, image signal processors, or flashes) …; generating correction content based on the location of the user, wherein the location includes a distance between the first housing and the user (Chung: [0042], a distance sensor), the angle between the first housing and the second housing (Chung: Abstract, a foldable electronic device for controlling a user interface…detecting an angle between first housing of the electronic device and a second housing of the electronic device using at least one sensor circuit…the second user interface comprising the first object that is displaying in a second area…and the second area may be different from the first area; [0086], the event converter may calculate a folding angle between the first surface and the second surface of the electronic device), an angle between the user and the first housing (Chung: [0039], detecting a user’s field of view or angle of view heading toward the electronic device)…wherein the location includes a distance between the second housing and the user (Chung: [0042], a distance sensor), the angle between the first housing and the second housing (Chung: Abstract, a foldable electronic device for controlling a user interface…detecting an angle between first housing of the electronic device and a second housing of the electronic device using at least one sensor circuit…the second user interface comprising the first object that is displaying in a second area…and the second area may be different from the first area; [0086], the event converter may calculate a folding angle between the first surface and the second surface of the electronic device), an angle between the user and the second housing (Chung: [0039], detecting a user’s field of 
view or angle of view heading toward the electronic device)
Chung discloses displaying content in a first area and a second area based on a calculated angle, but Chung does not clearly disclose obtaining the user’s line of sight based on facial data of the user obtained by a camera, and changing the displayed content based on the user’s angle and location in one part of the display. In an analogous art of displaying content on a flexible display, Rakshit discloses: obtaining a location of a user, wherein the location of the user comprises the user’s line of sight, based on facial data of the user obtained by a camera (Rakshit: [0024], scaling has been applied to the portion of the image being displayed on second (extended) content region 202 based on a user's line of sight; [0028], the user's viewing angle is detected, based on detecting location of the user's eyes for instance, and software installed on the computer device uses this in conjunction with the user's touch-input to find the effective touch point in the extended area; [0042], one or more cameras, proximity devices, or other sensors installed in the device, can identify the user's viewing direction and may identify the extended display in the portion being bent/flexed relative to the other portion of the display); …generating correction content based on the location of the user, the angle, and the first portion of the content (Rakshit: [0025]-[0026], based on the user’s viewing angle and scaling applied to the content on the extended content region, change the displayed content; [0028], the skew presented by the scaling of the extended content can be tailored to the particular viewing angle of the user using the device so that the desired effect is presented relative to that user’s perspective; the user’s viewing angle is detected, based on detecting location of the user’s eyes; [0051], identifies a viewing angle of a user viewing the extended content region of the flexible display, and scales the extended content based on that viewing angle); and displaying the correction content on the first housing and the second portion of the content on the second housing unmodified, and … generating correction content based on the location of the user, the angle, and the second portion of the content (Rakshit: [0025]-[0026], based on the user’s viewing angle and scaling applied to the content on the extended content region, change the displayed content; [0028], the skew presented by the scaling of the extended content can be tailored to the particular viewing angle of the user using the device so that the desired effect is presented relative to that user’s perspective; the user’s viewing angle is detected, based on detecting location of the user’s eyes; [0051], identifies a viewing angle of a user viewing the extended content region of the flexible display, and scales the extended content based on that viewing angle); and displaying the correction content on the second housing and the first portion of the content on the first housing unmodified (Rakshit: Figs. 2A-2B and [0022]-[0028], flexion is applied to a flexible display, in which extended content presented in an extended content region of the flexible display is scaled to present a depth dimension; for example, when the display is bent, scaling is applied to the portion of the image being displayed on second content region 202 based on a user’s line of sight; from the user’s perspective, the features in the top portion of the beach scene, i.e., the top of the tree, the sun, and the clouds, will be scaled more aggressively the farther away they are from the bend line, and the scaling will introduce a depth dimension to the top portion of the image on region 202; that is, the content displayed on the top portion of the device is modified and the content displayed on the lower portion of the device is unmodified; similarly, when the lower portion of the device is the extended content region, the content displayed on the lower portion of the device is modified and the content displayed on the top portion of the device is unmodified).
Chung and Rakshit are analogous arts because they are in the same field of endeavor: displaying and adjusting content in two different regions of a flexible display. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Chung using the teachings of Rakshit to include tailoring content in the second region based on the detected viewing angle and location of the user. It would provide Chung’s method with the enhanced capability of displaying adjusted content based on the user’s perspective, so the user’s experience with the flexible display is improved.
Further, Chung discloses obtaining a location of a user based on data of the user obtained by a camera, but Chung does not clearly disclose obtaining data by a camera at a predetermined interval. In another analogous art of managing displayed data on a flexible display, AN discloses: …based on facial data of the user obtained by a camera with a predetermined interval (AN: [0039], the sensor may include a plurality of gaze-detecting sensors arranged at a predetermined interval to obtain the user gaze information). Chung discloses a distance sensor but does not clearly disclose detecting a distance between the device and the user; AN discloses: wherein the location includes a distance between the first housing and the user (AN: [0035], obtain distance information regarding a distance from the flexible display device to a pupil of the user).
Chung and AN are analogous arts because they are in the same field of endeavor: displaying and adjusting content in two different regions of a flexible display. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Chung using the teachings of AN to include obtaining data at a predetermined interval and obtaining a distance between the device and the user. It would provide Chung’s method with the enhanced capability of displaying adjusted content based on the user’s perspective, so the user’s experience with the flexible display is improved.
Furthermore, Chung-Rakshit discloses displaying the correction content on one of the housings based on the location of the user and the angle, but does not clearly disclose determining whether a user interface to receive user input overlays or is separated from other content. Cheong discloses: when a user interface to receive user input overlays other content (Cheong: [0136], corresponding to a lining input traversing the screen in the state where the second window (e.g., an alpha screen) 607 is overlaid and displayed; [0139], upon detecting the event, a screen ratio of at least two screens constituting the display 460 may be set in response to the detection of the event; for example, whenever the button displayed on the display 460 is selected, at least one of the splitting ratio of the at least two screens constituting the display 460 and the screen display type (e.g., overlay or split) may be varied; that is, the display ratio may be varied based on whether the user interface receiving user input is overlaid or separated) … when a user interface to receive user input is separated from other content (Cheong: [0139], upon detecting the event, a screen ratio of at least two screens constituting the display 460 may be set in response to the detection of the event; for example, whenever the button displayed on the display 460 is selected, at least one of the splitting ratio of the at least two screens constituting the display 460 and the screen display type (e.g., overlay or split) may be varied; [0183], as shown in FIG. 17A(a), the second area 1700 may be separated from the first area 1705 with respect to an arbitrary reference axis; that is, the display ratio may be varied based on whether the user interface receiving user input is overlaid or separated).
Chung and Cheong are analogous arts because they are in the same field of endeavor: displaying and adjusting content in two different regions of a flexible display. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Chung using the teachings of Cheong to include tailoring content based on whether the user interface receiving the detected user input overlays or is separated from other content. It would provide Chung’s method with the enhanced capability of displaying adjusted content based on the user’s input, so the user’s experience with the flexible display is improved.
As for claim 2, Chung-Rakshit discloses: wherein the obtained location of the user comprises a distance between one of the first housing and the second housing that displays the correction content (Chung: [0042], a distance sensor; Rakshit: [0020], the angle between display and mirror; [0025], based on the user’s viewing angle and scaling applied to the content on the extended content region; [0028], when building the pixel data for presentation of the pixel content on the display, may take into account viewing angle of the user; the skew presented by the scaling of the extended content can be tailored to the particular viewing angle of the user using the device so that the desired effect is presented relative to that user’s perspective; the user’s viewing angle is detected, based on detecting location of the user’s eyes).
As for claim 3, Chung-Rakshit discloses: determining a location of a first virtual camera associated with the first portion of the content corresponding to the first housing and a location of a second virtual camera associated with the second portion of the content corresponding to the second housing (Rakshit: [0028], when building the pixel data for presentation of the pixel content on the display, may take into account viewing angle of the user; the skew presented by the scaling of the extended content can be tailored to the particular viewing angle of the user using the device so that the desired effect is presented relative to that user’s perspective; the user’s viewing angle is detected, based on detecting location of the user’s eyes).
As for claim 4, Chung-Rakshit discloses: wherein the location of the one of the first virtual camera and the second virtual camera that corresponds to the one of the first housing and the second housing that displays the correction content is changed based on the angle between the first housing and the second housing and the location of the user, and the other of the first virtual camera and the second virtual camera is stationary (Rakshit: [0028], when building the pixel data for presentation of the pixel content on the display, may take into account viewing angle of the user; the skew presented by the scaling of the extended content can be tailored to the particular viewing angle of the user using the device so that the desired effect is presented relative to that user’s perspective; the user’s viewing angle is detected, based on detecting location of the user’s eyes; [0042], a view or gaze point tracker, implemented by way of facilities such as one or more cameras, proximity devices, or other sensors installed in the device, can identify the user’s viewing direction and may identify the extended display in the portion being bent/flexed relative to the other portion of the display).
Claim 9 is cancelled.
As per Claim 10, it recites features that are substantially the same as those claimed by Claim 1; thus the rationales for rejecting Claim 1 are incorporated herein.
As per Claim 11, it recites features that are substantially the same as those claimed by Claim 2; thus the rationales for rejecting Claim 2 are incorporated herein.
As per Claim 12, it recites features that are substantially the same as those claimed by Claim 3; thus the rationales for rejecting Claim 3 are incorporated herein.
As per Claim 13, it recites features that are substantially the same as those claimed by Claim 4; thus the rationales for rejecting Claim 4 are incorporated herein.
As per Claim 20, it recites features that are substantially the same as those claimed by Claim 1; thus the rationales for rejecting Claim 1 are incorporated herein.
6. Claims 5-8, 14, 15, 17 are rejected under 35 U.S.C. 103 as being unpatentable over Chung, Rakshit, AN, and Cheong as applied to claim 1 above, and further in view of Giang-yoon Kwon et al. (US Publication 20140055429 A1, hereinafter Kwon).
As for claim 5, Chung discloses adjusting the displayed content based on the determined size of the second display area (Chung: [0100]) but does not disclose determining a ratio of correction content. In another analogous art of changing a displayed object on a flexible display, Kwon expressly discloses: determining a size and a ratio of correction content based on the location of the one of the first virtual camera and the second virtual camera (Kwon: [0026], detecting a direction of a user’s eyeline, changing a display perspective according to a bending angle according to the detected direction of the user’s eyeline and bend, changing at least one of the first contents and the information related to the first contents to correspond to the display perspective and displaying the same on the second screen and the third screen; [0064], perform scaling on the size of the first contents displayed on the entire screen to be suitable for the divided second screen size; the controller may adjust the resolution of the first contents image, or process the contents image by adjusting only the size to be suitable to the second screen while keeping the ratio unchanged; [0070], a display perspective refers to applying perspective (sense of distance and proximity) on a 2-dimensional plane such as a display as if actually seen by eyes; it may be a display method of making the displayed objects have perspectives at the user’s point of view according to the user’s eyeline location and direction).
Chung, Rakshit, and Kwon are analogous arts because they are in the same field of endeavor: displaying and adjusting content in two different regions of a flexible display. Therefore, it would have been obvious to one of ordinary skill in the art, before the effective filing date of the claimed invention, to modify the invention of Chung using the teachings of Kwon to include determining a size and a ratio for the displayed content. It would provide Chung’s method with the enhanced capability of making the displayed objects have perspectives at the user’s point of view according to the user’s eyeline location and direction, as suggested by Kwon ([0070]).
As for claim 6, Chung-Rakshit-Kwon discloses: wherein the determining of the size and the ratio of the correction content comprises determining the size and the ratio of the correction content to allow the first portion and the second portion to be displayed on one virtual plane (Kwon: [0064], perform scaling on the size of the first contents displayed on the entire screen to be suitable for the divided second screen size; the controller may adjust the resolution of the first contents image, or process the contents image by adjusting only the size to be suitable to the second screen while keeping the ratio unchanged; [0070], a display perspective refers to applying perspective (sense of distance and proximity) on a 2-dimensional plane such as a display as if actually seen by eyes; it may be a display method of making the displayed objects have perspectives at the user’s point of view according to the user’s eyeline location and direction).
As for claim 7, Chung-Rakshit-Kwon discloses: moving the one of the first and second virtual camera by an angle corresponding to a sum of the angle between the first housing and the second housing and a vertical angle between the one of the first housing and second housing that displays the correction content and the user when the location of the user is in front of the electronic device (Kwon: [0222], the controller activates the camera to photograph the user and tracks the user’s motion change to perform control operations; please note that detecting user motion change is detecting angle changes and user location changes).
As for claim 8, Chung-Rakshit-Kwon discloses: identifying the user location moved in a place distant from the front of the electronic device in a horizontal direction; moving the one of the first and second virtual camera in a vertical direction by an angle corresponding to a sum of the angle between the first housing and the second housing and a vertical angle between the one of the first housing and the second housing that displays the correction content and the user; and moving the one of the first and second virtual camera in a horizontal direction by an angle corresponding to a horizontal angle between the one of the first housing and the second housing that displays the correction content and the user (Kwon: [0222], the controller activates the camera to photograph the user and tracks the user’s motion change to perform control operations; please note that detecting user motion change is detecting angle changes and user location changes).
As per Claim 14, it recites features that are substantially the same as those claimed by Claim 5; thus the rationales for rejecting Claim 5 are incorporated herein.
As for claim 15, Chung-Rakshit-Kwon discloses: wherein the at least one processor is further configured to execute the one or more instructions to determine a size and a ratio of correction content corresponding to the second portion of the content, based on the location of the second virtual camera (Kwon: [0026], detecting a direction of a user’s eyeline, changing a display perspective according to a bending angle according to the detected direction of the user’s eyeline and bend, changing at least one of the first contents and the information related to the first contents to correspond to the display perspective and displaying the same on the second screen and the third screen; [0064], perform scaling on the size of the first contents displayed on the entire screen to be suitable for the divided second screen size; the controller may adjust the resolution of the first contents image, or process the contents image by adjusting only the size to be suitable to the second screen while keeping the ratio unchanged; [0070], a display perspective refers to applying perspective (sense of distance and proximity) on a 2-dimensional plane such as a display as if actually seen by eyes; it may be a display method of making the displayed objects have perspectives at the user’s point of view according to the user’s eyeline location and direction).
Claim 16 is cancelled.
As per Claim 17, it recites features that are substantially the same as those claimed by Claim 7; thus the rationales for rejecting Claim 7 are incorporated herein.
Claims 18-19 are cancelled.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to Hua Lu whose telephone number is 571-270-1410 and fax number is 571-270-2410. The examiner can normally be reached on Mon-Fri 7:30 am to 5:00 pm EST. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Scott Baderman can be reached on 571-270-3644. The fax phone number for the organization where this application or proceeding is assigned is 703-273-8300.
Information regarding the status of an application may be obtained from the Patent Center. Should you have questions on access to the Patent Center system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/HUA LU/
Primary Examiner, Art Unit 2118