DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 10/26/2023, 05/09/2024, 11/19/2024 and 12/19/2025 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.
Claim Rejections - 35 USC § 102
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:
A person shall be entitled to a patent unless –
(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.
Claim(s) 1-7, 10-17 and 20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by Lee et al. (US 2019/0187758 A1).
The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2). This rejection under 35 U.S.C. 102(a)(2) might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B) if the same invention is not being claimed; or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed in the reference and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement.
Regarding claim 1, Lee et al. (figures 1-3, 6 and 14) disclose an electronic device (flexible device 100, 300, 600, paragraph [0044]) comprising: a first housing (half right part of the electronic device 300 in figures 3A-3C, which has a first (front) surface and a second (rear) surface opposite to the first surface) comprising a first surface facing a first direction, a second surface facing a second direction opposite to the first surface, and a first lateral member surrounding a first space between the first surface and the second surface (paragraphs [0056]-[0060]); a second housing (half left part of the electronic device 300 in figures 3A-3C, which has a third (front) surface and a fourth (rear) surface opposite to the third surface) connected to the first housing, and configured to be foldable about a folding axis, the second housing comprising a third surface facing the first direction in an unfolded state, a fourth surface facing the second direction in the unfolded state, and a second lateral member surrounding a second space between the third surface and the fourth surface (paragraphs [0056]-[0060]); a first display (figure 1, display 140; figures 3A-3C, display 310) provided on at least a portion of the first surface and at least a portion of the third surface; a sensor circuit (figure 1, sensor module 120); and a processor (figure 1, processor 110) operatively connected to the first display and the sensor circuit, wherein the processor is configured to: display first information corresponding to a first application in a first area on the first display (paragraphs [0051] and [0062], figure 3D, first window 312); display second information corresponding to a second application in a second area on the first display (paragraphs [0052]-[0053] and [0062], figure 3D, second window 314); acquire sensor information through the sensor circuit (paragraph [0054], figure 2, process 208; and paragraph [0078], figure 6D); identify whether a user input is detected on the
second surface or the fourth surface based on the acquired sensor information (paragraph [0054], figure 2, process 208; and paragraphs [0076]-[0078], figures 6A-6D); identify, based on the detected user input, a type of the user input and a location of the user input (paragraph [0054], figure 2, process 208; paragraphs [0076]-[0078], figures 6A-6D; and paragraph [0093], figure 14); change a display attribute of at least one of the first information corresponding to the first application and the second information corresponding to the second application, based on the type of the user input and the location of the user input (paragraph [0055], figure 2, process 210; and paragraph [0078], figure 6D); and display at least one of the first information and the second information on the first display, based on the changed display attribute (paragraph [0055], figure 2, process 210; and paragraph [0078], figure 6D).
Regarding claim 2, Lee et al. disclose the electronic device of claim 1, wherein the processor is further configured to: correct sensor data of the detected user input, based on the acquired sensor information; and identify, based on the corrected sensor data, the type of the user input and the location of the user input (paragraph [0054], figure 2, process 208; paragraphs [0076]-[0078], figures 6A-6D; and paragraph [0093], figure 14).
Regarding claim 3, Lee et al. disclose the electronic device of claim 1, wherein the processor is further configured to change the display attribute by changing at least one of a size of a window or an arrangement of the window within a display area of the first display for displaying at least one of the first information corresponding to the first application and the second information corresponding to the second application (paragraphs [0076]-[0078]).
Regarding claim 4, Lee et al. disclose the electronic device of claim 1, further comprising: a second display (top layer) provided in the second housing, and configured to be at least partially visible from outside through the fourth surface (figure 3F, paragraph [0064]), wherein the sensor circuit comprises at least one of an inertial sensor or a grip sensor (paragraphs [0077]-[0078]).
Regarding claim 5, Lee et al. disclose the electronic device of claim 4, wherein the sensor information comprises at least one of first sensor information acquired through the inertial sensor, second sensor information acquired through the grip sensor, or third sensor information acquired through a touch circuit of the second display (paragraphs [0076]-[0078] and [0093]).
Regarding claim 6, Lee et al. disclose the electronic device of claim 5, wherein the first sensor information comprises at least one of sensor information related to a posture of the electronic device or sensor information related to movement of the electronic device, wherein the second sensor information comprises at least one of a grip state or a grip pattern of the electronic device, and wherein the third sensor information comprises touch information acquired through the touch circuit of the second display (paragraphs [0076]-[0078] and [0093]).
Regarding claim 7, Lee et al. disclose the electronic device of claim 5, wherein the processor is further configured to correct the sensor data of the detected user input, based on at least one of the first sensor information, the second sensor information, or the third sensor information (paragraphs [0076]-[0078]).
Regarding claim 10, Lee et al. (figures 1-3, 6 and 14) disclose a method for controlling a screen according to a user interaction by an electronic device (flexible device 100, 300, 600, paragraph [0044]) including a first housing having a first surface facing a first direction, a second surface facing a second direction opposite to the first surface (half right part of the electronic device 300 in figures 3A-3C, which has a first (front) surface and a second (rear) surface opposite to the first surface), and a second housing connected to the first housing in a foldable manner, and having a third surface facing the first direction in an unfolded state and a fourth surface facing the second direction in the unfolded state (half left part of the electronic device 300 in figures 3A-3C, which has a third (front) surface and a fourth (rear) surface opposite to the third surface) (paragraphs [0056]-[0060]), the method comprising: displaying first information corresponding to a first application in a first area on a first display provided on at least a portion of the first surface and at least a portion of the third surface (paragraphs [0051] and [0062], figure 3D, first window 312); displaying second information corresponding to a second application in a second area on the first display (paragraphs [0052]-[0053] and [0062], figure 3D, second window 314); acquiring sensor information through a sensor circuit; identifying whether a user input is detected on the second surface or the fourth surface based on the acquired sensor information (paragraph [0054], figure 2, process 208; and paragraph [0078], figure 6D); identifying, based on the detected user input, a type of the user input and a location of the user input (paragraph [0054], figure 2, process 208; paragraphs [0076]-[0078], figures 6A-6D; and paragraph [0093], figure 14); changing a display attribute of at least one of the first information corresponding to the first application and the second information corresponding to the second
application, based on the type of the user input and the location of the user input (paragraph [0055], figure 2, process 210; and paragraph [0078], figure 6D); and displaying at least one of the first information and the second information on the first display, based on the changed display attribute (paragraph [0055], figure 2, process 210; and paragraph [0078], figure 6D).
Regarding claim 11, Lee et al. disclose the method of claim 10, wherein the identifying of the type of the user input and the location of the user input comprises: correcting sensor data of the detected user input, based on the acquired sensor information; and identifying, based on the corrected sensor data, the type of the user input and the location of the user input (paragraph [0054], figure 2, process 208; paragraphs [0076]-[0078], figures 6A-6D; and paragraph [0093], figure 14).
Regarding claim 12, Lee et al. disclose the method of claim 10, wherein the changing of the display attribute of the at least one comprises changing at least one of a size of a window or an arrangement of the window within a display area of the first display for displaying at least one of the first information corresponding to the first application and the second information corresponding to the second application (paragraphs [0076]-[0078]).
Regarding claim 13, Lee et al. disclose the method of claim 10, wherein the sensor information is acquired through at least one of an inertial sensor or a grip sensor (paragraphs [0077]-[0078]).
Regarding claim 14, Lee et al. disclose the method of claim 13, wherein the sensor information comprises at least one of first sensor information acquired through the inertial sensor, second sensor information acquired through the grip sensor, or third sensor information acquired through a touch circuit of a second display provided to be at least partially visible from outside through the fourth surface (paragraphs [0076]-[0078]).
Regarding claim 15, Lee et al. disclose the method of claim 14, wherein the first sensor information comprises at least one of sensor information related to a posture of the electronic device or sensor information related to movement of the electronic device; and wherein the second sensor information comprises at least one of a grip state or a grip pattern of the electronic device (paragraphs [0076]-[0078] and [0093]).
Regarding claim 16, Lee et al. disclose the method of claim 14, wherein the third sensor information comprises touch information acquired through the touch circuit of the second display (paragraphs [0076]-[0078] and [0093]).
Regarding claim 17, Lee et al. disclose the method of claim 14, wherein the correcting of the sensor data of the detected user input comprises correcting the sensor data of the detected user input, based on at least one of the first sensor information, the second sensor information, or the third sensor information (paragraphs [0076]-[0078]).
Regarding claim 20, Lee et al. (figures 1-3, 6 and 14) disclose a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by a processor (figure 1, processor 110), cause an electronic device (flexible device 100, 300, 600, paragraph [0044]) to: display first information corresponding to a first application in a first area on a first display (paragraphs [0051] and [0062], figure 3D, first window 312); display second information corresponding to a second application in a second area on the first display (paragraphs [0052]-[0053] and [0062], figure 3D, second window 314); acquire sensor information through a sensor circuit (paragraph [0054], figure 2, process 208; and paragraph [0078], figure 6D); identify whether a user input is detected on a second surface or a fourth surface of the electronic device, based on the acquired sensor information (paragraph [0054], figure 2, process 208; paragraphs [0076]-[0078], figures 6A-6D; and paragraph [0093], figure 14); identify, based on the detected user input, a type of the user input and a location of the user input (paragraph [0054], figure 2, process 208; paragraphs [0076]-[0078], figures 6A-6D; and paragraph [0093], figure 14); change a display attribute of at least one of the first information corresponding to the first application and the second information corresponding to the second application, based on the type of the user input and the location of the user input (paragraph [0055], figure 2, process 210; and paragraph [0078], figure 6D); and display at least one of the first information and the second information on the first display, based on the changed display attribute (paragraph [0055], figure 2, process 210; and paragraph [0078], figure 6D).
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 8, 9, 18 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Lee et al. (US 2019/0187758 A1) in view of Shah et al. (US 2022/0214725 A1).
The applied reference has a common assignee with the instant application. Based upon the earlier effectively filed date of the reference, it constitutes prior art under 35 U.S.C. 102(a)(2).
Regarding claims 8 and 18, Lee et al. disclose the electronic device and method of claims 1 and 10, respectively, above. Lee et al. do not explicitly disclose the electronic device further comprising a memory, wherein the processor and the method are further configured to: accumulate and store, in the memory, the sensor information acquired through the sensor circuit, the type of the user input and the location of the user input; generate an artificial intelligence (AI) model, through a learning process, based on the stored sensor information and the stored type of the user input and the location of the user input; and identify, based on the AI model generated by the learning process, the type of the user input and the location of the user input. However, Shah et al. (figures 1, 3 and 8) disclose an electronic device (computing device 100) comprising a memory (103A) and a method for controlling a screen according to a user interaction by the electronic device, the method comprising: accumulating and storing, in the memory, the sensor information acquired through the sensor circuit, the type of the user input and the location of the user input (paragraph [0088]); generating an artificial intelligence (AI) model (neural network model 807), through a learning process, based on the stored sensor information and the stored type of the user input and the location of the user input; and identifying, based on the AI model generated by the learning process, the type of the user input and the location of the user input (paragraphs [0034], [0035], [0081]-[0083] and [0090]-[0093]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to adapt the above teachings of Shah et al. to the electronic device and method of Lee et al. in order to intelligently predict user input for user convenience.
Regarding claims 9 and 19, Lee et al. disclose the electronic device and method of claims 1 and 10, respectively, above. Lee et al. do not explicitly disclose the electronic device further comprising a wireless communication circuit, wherein the processor and the method are further configured to: transmit the sensor information to a server through the wireless communication circuit; receive an artificial intelligence (AI) model, learned through machine learning based on the sensor information, from the server; and identify the type of the user input and the location of the user input based on the AI model. However, Shah et al. (figures 1, 3 and 8) disclose an electronic device (computing device 100) comprising a wireless communication circuit (figure 8) and a method for controlling a screen according to a user interaction by the electronic device, the method comprising: transmitting the sensor information to a server (cloud computing device 860) through the wireless communication circuit; receiving an artificial intelligence (AI) model (neural network model 807), learned through machine learning based on the sensor information, from the server; and identifying the type of the user input and the location of the user input based on the AI model (paragraphs [0034], [0035], [0081]-[0083] and [0088]-[0093]). Therefore, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to adapt the above teachings of Shah et al. to the electronic device and method of Lee et al. in order to intelligently predict user input for user convenience.
This rejection under 35 U.S.C. 103 might be overcome by: (1) a showing under 37 CFR 1.130(a) that the subject matter disclosed in the reference was obtained directly or indirectly from the inventor or a joint inventor of this application and is thus not prior art in accordance with 35 U.S.C. 102(b)(2)(A); (2) a showing under 37 CFR 1.130(b) of a prior public disclosure under 35 U.S.C. 102(b)(2)(B); or (3) a statement pursuant to 35 U.S.C. 102(b)(2)(C) establishing that, not later than the effective filing date of the claimed invention, the subject matter disclosed and the claimed invention were either owned by the same person or subject to an obligation of assignment to the same person or subject to a joint research agreement. See generally MPEP § 717.02.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Kim et al. (US 11,009,912 B2) disclose a mobile terminal that includes: a body having a variable size; a flexible display including a first display located on a front face of the body and a second display located on a rear face of the body; a driving unit configured to change both the variable size of the body and a size of the first display; a sensing unit configured to sense an input signal; and a controller configured to, in response to the input signal having a value equal to or greater than a threshold value, control the driving unit to change both the variable size of the body and the size of the first display, in which a total area of the first display and the second display remains constant, and a size of the second display is correspondingly reduced as the size of the first display increases.
Klein et al. (US 11,138,912 B2) teach a bendable computing device that operates in a single display region mode when the device is in an unbent posture and operates in a multiple display region mode when the device is in a bent posture; the multiple display region mode subdivides the bendable screen of the bendable computing device into a first display region and a second display region; user input gestures originating at an artificial hardware seam or terminating at the artificial hardware seam can be utilized to provide various types of functionality.
Ahn et al. (US 11,934,651 B2) disclose a mobile terminal which has a touch screen including a plurality of regions classified by software, or which is linked to another touch screen, the mobile terminal comprising: a first touch screen for displaying screen image information about an application being executed; a communication unit for performing a communication connection with a second touch screen linked to the first touch screen; and a control unit which generates a multi-touch event by combining a first touch event formed by a first touch input and a second touch event formed by a second touch input when the second touch input is applied to either the first or second touch screen in a state in which the first touch input is applied to the other one of the first and second touch screen, and which performs a function according to the multi-touch event with respect to the screen image information displayed on the touch screen to which the first touch input has been applied.
Chun et al. (US 12,063,754 B2) teach a flexible display device, comprising: a first body; a second body configured to be movable relative to the first body; a support plate installed on the rear surface of the second body and configured to be movable relative to the second body; a flexible display unit configured such that areas exposed to the outside are changed while the first body and the second body move relative to each other; and a driving unit generating a driving force such that the second body slides with respect to the first body, wherein as the driving unit causes a movement of the second body through a rotational force by means of a motor, the areas exposed to the outside of the front part and the rear part of the flexible display unit can be changed respectively.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to QUOCHIEN B VUONG whose telephone number is (571) 272-7902. The examiner can normally be reached 10:00 AM - 6:00 PM M-F.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, ANTHONY ADDY, can be reached at 571-272-7795. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/QUOCHIEN B VUONG/Primary Examiner, Art Unit 2645