DETAILED ACTION
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claims 1-13, 15-16 and 18-22 are pending.
Drawings
The drawings, FIGS. 3-6, are objected to under 37 CFR 1.83(a) because the unlabeled rectangular boxes (e.g., boxes 351-353 in FIG. 3, boxes 461-465 in FIG. 4, boxes 571-575 in FIG. 5, and boxes 681-685 in FIG. 6) shown in the drawings should be provided with descriptive text labels (see 37 CFR 1.83 and 37 CFR 1.84(n)). Any structural detail that is essential for a proper understanding of the disclosed invention should be shown in the drawing. See MPEP § 608.02(d).
Corrected drawing sheets in compliance with 37 CFR 1.121(d) are required in reply to the Office action to avoid abandonment of the application. Any amended replacement drawing sheet should include all of the figures appearing on the immediate prior version of the sheet, even if only one figure is being amended. The figure or figure number of an amended drawing should not be labeled as “amended.” If a drawing figure is to be canceled, the appropriate figure must be removed from the replacement sheet, and where necessary, the remaining figures must be renumbered and appropriate changes made to the brief description of the several views of the drawings for consistency.
Additional replacement sheets may be necessary to show the renumbering of the remaining figures. Each drawing sheet submitted after the filing date of an application must be labeled in the top margin as either “Replacement Sheet” or “New Sheet” pursuant to 37 CFR 1.121(d). If the changes are not accepted by the examiner, the applicant will be notified and informed of any required corrective action in the next Office action. The objection to the drawings will not be held in abeyance.
Response to Amendment
Applicant’s response to the last Office Action, dated Jan. 29, 2026, has been entered and made of record. In view of Applicant’s amendment of the title, the objection to the specification is withdrawn. Applicant’s amendment necessitated the new ground(s) of rejection presented in this Office Action.
Response to Arguments
Applicant’s arguments, dated Jan. 29, 2026, have been considered but are moot because the arguments do not apply to all of the references being used in the current rejection. Please see the following claim rejections for detailed analysis.
Claim Rejections - 35 USC § 103
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office Action.
Claims 1, 10 and 21 are rejected under 35 U.S.C. 103 as being unpatentable over Guan (CN 113294634 A; hereinafter cited from the English translation by Clarivate Analytics) in view of Kosugi (US 2021/0397265 A1).
As to claim 1, Guan teaches a display device (Guan, FIG. 1, “intelligent sound box 100 comprises a screen adjusting structure”), comprising:
a flexible display screen (Guan, FIG. 1, “flexible screen 20”); and
one or more actuators (Guan, FIGS. 5-6, “when the motor comprises a motor shaft 30, the motor is always connected with the screw rod 32, specifically the motor shaft 30 can be always connected with the screw rod 32, or coaxial and so on”) configured to induce bending of the flexible display screen in a first direction to move the flexible display screen between (Guan, FIGS. 1-6, “through a set of driving mechanism and the clutch structure design, the can flexible screen rotate the drive or slide up and down, the structure is simple and reliable, the production cost is low, the occupied space is small”) a first configuration (Guan, FIGS. 3-4, “flat working state”) and a second configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”),
wherein the flexible display screen (Guan, FIG. 1, “flexible screen 20”) is one of flat or concave in the first direction in the first configuration (Guan, FIGS. 3-4, “flat working state”), and the flexible display screen (Guan, FIG. 1, “flexible screen 20”) is convex in the first direction in the second configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”); and
one or more processors (Guan, FIGS. 5-7, “controller”) configured to:
control the one or more actuators (Guan, FIGS. 1-6, “through a set of driving mechanism and the clutch structure design, the can flexible screen rotate the drive or slide up and down”) to move the flexible display screen (Guan, FIG. 1, “flexible screen 20”) from the second configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) to the first configuration (Guan, FIGS. 3-4, “flat working state”).
Guan does not teach “one or more first sensors configured to sense a subject, the one or more first sensors having a first resolution; one or more second sensors configured to sense the subject, the one or more second sensors having a second resolution that is higher than the first resolution”; and moving the flexible display … “in response to a determination that the subject is in proximity of the flexible display screen, wherein the determination is made based on sensor outputs from the one or more first sensors without use of sensor outputs from the one or more second sensors, and display a content item based on sensing of the subject by the one or more second sensors when the flexible display screen is in the first configuration”.
However, Kosugi teaches the concepts of one or more first sensing modes (Kosugi, FIGS. 1-3, [0020], “low-resolution mode”) configured to sense a subject (Kosugi, FIGS. 1-3, e.g., “approach detection mode (state A)” and “leave detection mode (state B)” for “user U”), the one or more first sensing modes (Kosugi, FIGS. 1-3, [0020], “low-resolution mode”) having a first resolution (Kosugi, FIGS. 1-3, [0020], e.g., “low resolution, e.g., 4×4”);
one or more second sensing modes (Kosugi, FIGS. 1-3, [0020], “high-resolution mode”) configured to sense the subject (Kosugi, FIGS. 1-3, e.g., “gesture detection mode (state C)” for “user U”), the one or more second sensing modes (Kosugi, FIGS. 1-3, [0020], “high-resolution mode”) having a second resolution (Kosugi, FIGS. 1-3, [0020], e.g., “high resolution, e.g., 8×8”) that is higher than the first resolution (Kosugi, FIGS. 1-3, [0020], e.g., “low resolution, e.g., 4×4”); and
turning on or off the display … in response to a determination that the subject is in proximity of the flexible display screen (Kosugi, e.g., see FIGS. 1-3, “boot” in “approach detection mode” → “normal operating state” in “gesture detection mode”),
wherein the determination is made based on sensor outputs from the one or more first sensors without use of sensor outputs from the one or more second sensors (Kosugi, FIGS. 1-3, [0053], “proximity sensor 130 … outputs a detection signal according to the sampled luminous intensity (that is, a detection signal indicative of the detection amount according to the distance to an object (see FIG. 5))”; [0053], “detection mode control unit 220 switches between detection modes based on the detection result of the person detection unit 210”), and
display a content item based on sensing of the subject by the one or more second sensors (Kosugi, e.g., see FIG. 1, [0017], “in a state where a person is present in front of the electronic apparatus 1 (Presence) as illustrated in FIG. 1(B), the electronic apparatus 1 imposes such a restriction on the system so as not to make a transition to the standby state and to continue the normal operating state”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the “flexible screen 20” to further comprise two types of dedicated proximity sensors, corresponding to the “low-resolution mode” for far distances and the “high-resolution mode” for close distances, and to switch modes according to the user’s approach and presence, as taught by Kosugi, so that, when a user is present at a close distance, the “flexible screen 20” displays a content item “when the flexible display screen is in the first configuration” (Guan, FIGS. 3-4, “flat working state”), in order to provide “an electronic apparatus and a control method capable of enabling detection of a gesture in addition to detection of a person while suppressing power consumption” (Kosugi, [0005]).
As to claim 10, Guan teaches the display device of claim 1, wherein the flexible display screen (Guan, FIG. 1, “flexible screen 20”) is generally cylindrical in the second configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”).
As to new claim 21, Guan in view of Kosugi teaches the display device of claim 1, wherein the one or more processors are further configured to control the one or more actuators to move the flexible display screen (Guan, FIGS. 1-6, “through a set of driving mechanism and the clutch structure design, the can flexible screen rotate the drive or slide up and down”) from the first configuration (Guan, FIGS. 3-4, “flat working state”) to the second configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) in response to a determination that the subject has not been sensed by either of the one or more first sensors or by the one or more second sensors for a threshold time period (Kosugi, [0061], “when the person detection unit 210 no longer detects the person from the state where the person detection unit 210 is detecting the person within the person detection range (that is, when the leave of the person from the electronic apparatus 1 is detected), the operation control unit 240 causes the system processing by the system processing unit 300 to make the transition from the normal operating state to the standby state”). Examiner renders the same motivation as in claim 1.
Claims 2-4 are rejected under 35 U.S.C. 103 as being unpatentable over Guan (CN 113294634 A) in view of Kosugi (US 2021/0397265 A1) and Ohe et al. (US 2020/0319836 A1).
As to claim 2, Guan in view of Kosugi teaches the display device of claim 1, wherein the one or more second sensors (Kosugi, FIGS. 1-3, [0020], “high-resolution mode”) are configured to identify a location of the subject relative to the flexible display screen (Guan, FIGS. 1-6, “the detecting device is used for identifying the orientation of the user, the controller is electrically connected with the driving mechanism and the detecting device, according to the orientation of the user control driving mechanism driving flexible screen driving mechanism 20 around the shell 10 of the circumferential rotation; until the screen faces the orientation of the user”).
Guan does not teach “the display device further comprising: one or more processors configured to determine a content location along the first direction of the flexible display screen according to the location of the subject relative to the flexible display screen, and to display a content item on the flexible display screen according to the content location, wherein the content location is determined such that the content item is viewable from the location of the subject when the content item is displayed according to the content location”.
However, Ohe teaches the concept of one or more processors (Ohe, FIG. 2, [0038], “controller 50”) configured to determine a content location along the first direction of the flexible display screen according to the location of the subject relative to the flexible display screen, and to display a content item on the flexible display screen according to the content location, wherein the content location is determined such that the content item is viewable from the location of the subject when the content item is displayed according to the content location (Ohe, FIGS. 5A-5B, [0064], “display unit 2 may cause the detector 40 to acquire the images of the viewer V, the information regarding the distance between each of the imaging units 43 and the viewer V, or the information regarding the changes in the voltages detected by the plurality of piezoelectric sensors 23 at all times in accordance with, for example, instructions of the controller 50. This variety of acquired information is stored in the ROM or the like of the control board 14. The analyzer 51 calculates the position and inclination of the face and the position of both eyes of the viewer V, etc. on the basis of the detection signal S1, and calculates a curvature of a portion of the display surface 24S that faces the viewer V, and sends them as the analysis signal S2 to the image processor 52. The image processor 52 creates a flat virtual screen VS2 that faces the viewer V on the basis of the analysis signal S2. The flat virtual screen VS2 that faces the viewer V is orthogonal to image light L directed to the viewer V”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify Guan in view of Kosugi to further comprise the “detector 40” and the “analyzer 51”, as taught by Ohe, in order to provide “a display unit that is able to provide a viewing environment comfortable for a viewer even in an interior space with a limited size” (Ohe, [0005]).
As to claim 3, Ohe teaches the display device of claim 2, wherein the one or more processors are further configured to determine a reference line (Ohe, FIG. 5A, [0064], e.g., the line orthogonal to the “flat virtual screen VS2”) between the subject (Ohe, FIG. 5A, [0064], “viewer V”) and the flexible display screen (Ohe, FIG. 5A, [0064], “display surface 24S”), and to determine the content location according to a location of the reference line with respect to the flexible display screen (Ohe, FIGS. 5A-5B, [0064], “The image processor 52 creates a flat virtual screen VS2 that faces the viewer V on the basis of the analysis signal S2. The flat virtual screen VS2 that faces the viewer V is orthogonal to image light L directed to the viewer V”). Examiner renders the same motivation as in claim 2.
As to claim 4, Guan in view of Ohe teaches the display device of claim 3, wherein the one or more processors (Ohe, FIG. 2, [0038], “controller 50”) are further configured to determine the reference line (Ohe, FIG. 5A, [0064], e.g., the line orthogonal to the “flat virtual screen VS2”) such that the reference line (Ohe, FIG. 5A, [0064], e.g., the line orthogonal to the “flat virtual screen VS2”) is perpendicular to the flexible display screen (Guan, FIG. 1, “flexible screen 20” in “flat working state”) and such that the reference line (Ohe, FIG. 5A, [0064], e.g., the line orthogonal to the “flat virtual screen VS2”) extends from the flexible display screen (Guan, FIG. 1, “flexible screen 20” in “flat working state”) to the location of the subject (Ohe, FIG. 5A, [0064], “viewer V”). Examiner renders the same motivation as in claim 2.
Claims 5-9, 11, 15 and 22 are rejected under 35 U.S.C. 103 as being unpatentable over Guan (CN 113294634 A) in view of Kosugi (US 2021/0397265 A1), Ohe et al. (US 2020/0319836 A1) and An (US 2018/0137801 A1).
As to claim 5, Guan in view of Kosugi and Ohe teaches the display device of claim 2, but does not teach wherein the one or more processors are further configured to identify the location of the subject relative to the flexible display screen as corresponding to a predefined environment zone, and to select the content location from a group of predefined content locations based on the predefined environment zone.
However, An teaches the concept that the one or more processors are further configured to identify the location of the subject relative to the flexible display screen (An, FIGS. 6A-6B, [0119], “the sensor 120 may include first through fourth gaze detecting sensors 611 through 617 arranged at a predetermined interval on a front surface (or a side surface) of the flexible display device 10”) as corresponding to a predefined environment zone (An, FIG. 6A, [0119], “predetermined range 620”), and to select the content location from a group of predefined content locations based on the predefined environment zone (An, FIG. 10, [0141]-[0145], “determine activated region S1020” → “control content to be displayed in activated region S1030”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the “flexible screen 20”, with the dedicated sensors for the “low/high resolution modes” as taught by Guan in view of Kosugi, to further comprise the “gaze detecting sensors 611 through 617”, as taught by An, in order to provide “a displaying method in consideration of a visible range of a user according to a bending or folding property of the flexible display” (An, [0006]).
As to claim 6, Guan in view of Kosugi and An teaches the display device of claim 1, wherein the one or more first sensors and the one or more second sensors (Kosugi, FIGS. 1-3, dedicated sensors for “low-resolution mode” and “high-resolution mode”, respectively) are connected to the flexible display screen (An, FIGS. 6A-6B, [0119], “arranged at a predetermined interval on a front surface (or a side surface) of the flexible display device 10”) and move with the flexible display screen (An, e.g., see FIG. 6B) between the first configuration (Guan, FIGS. 3-4, “flat working state”) and the second configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”). Examiner renders the same motivation as in claim 5.
As to claim 7, Kosugi in view of An teaches the display device of claim 6, wherein the one or more first sensors (Kosugi, FIGS. 1-3, dedicated sensors for “low-resolution mode”) are arranged in an array along a bezel of the flexible display screen and are spaced from one another along the first direction (An, see FIGS. 6A-6B, [0119], “arranged at a predetermined interval on a front surface (or a side surface) of the flexible display device 10”). Examiner renders the same motivation as in claim 5.
As to claim 8, An teaches the display device of claim 7, wherein the flexible display screen has a width defined between a first end and a second end that are spaced from one another in the first direction (An, see FIG. 6A), and a first pair of the one or more first sensors (Kosugi, FIGS. 1-3, dedicated sensors for “low-resolution mode”) is spaced apart by a distance equal to or greater than twenty-five percent of the width of the flexible display screen (An, see FIG. 6A, [0119], “arranged at a predetermined interval on a front surface (or a side surface) of the flexible display device 10”). Examiner renders the same motivation as in claim 5.
As to claim 9, Kosugi teaches the display device of claim 6, wherein the one or more first sensors (Kosugi, FIGS. 1-3, dedicated sensors for “low-resolution mode”) include one or more of an infrared sensor (Kosugi, FIGS. 1-3, [0093], “proximity sensor 130 which detects infrared light … coming from a person”), an ultrasonic sensor, a radar sensor, or a pulse lidar sensor, and wherein the one or more second sensors (Kosugi, FIGS. 1-3, dedicated sensors for “high-resolution mode”; [0043], “imaging unit 120 may be an infrared camera or may be a normal camera”) include one or more cameras. Examiner renders the same motivation as in claim 1.
As to claim 11, Guan teaches a display device (Guan, FIG. 1, “intelligent sound box 100 comprises a screen adjusting structure”), comprising:
a display screen (Guan, FIG. 1, “flexible screen 20”);
one or more actuators (Guan, FIGS. 1-6, “through a set of driving mechanism and the clutch structure design, the can flexible screen rotate the drive or slide up and down, the structure is simple and reliable, the production cost is low, the occupied space is small”) configured to move the display screen (Guan, FIG. 1, “flexible screen 20”) between a stored configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) and a deployed configuration (Guan, FIG. 1, “flexible screen 20” in “flat working state”).
Guan does not teach “one or more sensors configured to identify a location of a subject and a gaze direction of the subject relative to the display screen”.
However, An teaches the concept of one or more sensors (An, FIGS. 6A-6B, [0119], “gaze detecting sensors 611 through 617”) configured to identify a location of a subject and a gaze direction of the subject (An, FIGS. 6A-6B, [0119], “gaze detecting sensors 611 through 617”) relative to the display screen (An, see FIG. 6A, [0119], “flexible display device 10”).
Please see the combination statement and motivation to combine with An in claim 5.
Guan in view of An does not teach “one or more processors configured to determine whether one or more transition conditions are satisfied based on the location of the subject and the gaze direction of the subject, and configured to control the one or more actuators to move the display screen between the stored configuration and deployed configuration in response to a determination that the one or more transition conditions are satisfied”.
However, Ohe teaches the concept of one or more processors configured to determine whether one or more transition conditions are satisfied based on the location of the subject and the gaze direction of the subject (Ohe, FIGS. 5A-5B, [0064], “The analyzer 51 calculates the position and inclination of the face and the position of both eyes of the viewer V, etc. on the basis of the detection signal S1, and calculates a curvature of a portion of the display surface 24S that faces the viewer V, and sends them as the analysis signal S2 to the image processor 52”), and
configured to control the one or more actuators to move the display screen (Guan, FIGS. 1-6, “the detecting device is used for identifying the orientation of the user, the controller is electrically connected with the driving mechanism and the detecting device, according to the orientation of the user control driving mechanism driving flexible screen driving mechanism 20 around the shell 10 of the circumferential rotation; until the screen faces the orientation of the user”) between the stored configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) and deployed configuration (Guan, FIG. 1, “flexible screen 20” in “flat working state”) in response to a determination that the one or more transition conditions are satisfied (Ohe, FIGS. 5A-5B, [0064], “The image processor 52 creates a flat virtual screen VS2 that faces the viewer V on the basis of the analysis signal S2. The flat virtual screen VS2 that faces the viewer V is orthogonal to image light L directed to the viewer V”).
Please see the combination statement and motivation to combine with Ohe in claim 5.
Guan in view of An and Ohe does not teach “wherein the location of the subject includes a distance of the subject from the display screen, wherein the one or more transition conditions include a comparison of the distance of the subject from the display screen to a first distance range for a transition from the stored configuration to the deployed configuration, and wherein the one or more transition conditions include a comparison of the distance of the subject from the display screen to a second distance range for a transition from the deployed configuration to the stored configuration, wherein the second distance range corresponds to locations that are further from the display screen as compared to the first distance range”.
However, Kosugi teaches the concept that the location of the subject includes a distance of the subject (Kosugi, e.g., see FIGS. 1-3) from the display screen (Guan, FIG. 1, “flexible screen 20”),
wherein the one or more transition conditions (Kosugi, see FIGS. 1-3, e.g., distances determining “approach/leave detection mode” and “gesture detection mode”) include a comparison of the distance of the subject from the display screen to a first distance range (Kosugi, FIGS. 1-3, “approach/leave detection mode” for far distance and “gesture detection mode” for close distance) for a transition from the stored configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) to the deployed configuration (Guan, FIG. 1, “flexible screen 20” in “flat working state”), and
wherein the one or more transition conditions include a comparison of the distance of the subject from the display screen to a second distance range (Kosugi, FIGS. 1-3, “approach/leave detection mode” for far distance and “gesture detection mode” for close distance) for a transition from the deployed configuration (Guan, FIG. 1, “flexible screen 20” in “flat working state”) to the stored configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”),
wherein the second distance range corresponds to locations that are further from the display screen as compared to the first distance range (Kosugi, FIGS. 1-3, “approach/leave detection mode” for far distance and “gesture detection mode” for close distance).
Please see the combination statement and motivation to combine with Kosugi in claim 1.
As to claim 15, Guan in view of An teaches the display device of claim 11, wherein the one or more sensors (An, FIGS. 6A-6B, [0119], “gaze detecting sensors 611 through 617”) are connected to the display screen and move with the display screen (An, see FIG. 6A, [0119], “flexible display device 10”) between the stored configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) and the deployed configuration (Guan, FIG. 1, “flexible screen 20” in “flat working state”). Examiner renders the same motivation as in claim 5.
As to new claim 22, Kosugi teaches the display device of claim 11, wherein the one or more transition conditions include a determination that the subject is stationary prior to the transition from the stored configuration to the deployed configuration, and the subject is determined to be stationary when the subject is moving by less than a threshold amount (Kosugi, [0051], “unlike an object, the person moves a little even when staying still, that is, the person shakes at least a little. Therefore, such a condition that shaking is detected from a change in detection distance may be added to the person detection conditions”). Examiner renders the same motivation as in claim 1.
Claim 12 is rejected under 35 U.S.C. 103 as being unpatentable over Guan (CN 113294634 A) in view of An (US 2018/0137801 A1), Ohe et al. (US 2020/0319836 A1), Kosugi (US 2021/0397265 A1), Lee et al. (US 2023/0405435 A1), and Henderek et al. (US 2014/0247208 A1).
As to claim 12, Guan in view of An, Ohe and Kosugi teaches the display device of claim 11, but does not teach wherein the one or more transition conditions include an intention probability score that is determined based on the location of the subject and an amount of time over which the gaze direction of the subject has corresponded to looking toward the display screen, and represents a likelihood that the subject intends transition of the display screen between the stored configuration and the deployed configuration.
However, Lee teaches the concept that the one or more transition conditions include an intention probability score that is determined based on the location of the subject, and represents a likelihood that the subject intends involvement (Lee, FIG. 9, [0211]-[0212], “the processor 240 may identify at least one of the at least one object as the training target object based on at least one of a location of the at least one object (specifically, a user) detected in the first image, a gaze direction of the at least one object, or a distance between the at least one object and the display device 400. Specifically, the estimated score of intention of involvement may be calculated based on at least one of a location of at least one object detected in the first image, a gaze direction of the at least one object, or a distance between the at least one object and the display device 400”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the “analyzer 51” taught by Ohe to further estimate the “score of intention of involvement”, as taught by Lee, so as to apply the score to the transition of the display screen between the stored configuration (Guan, see FIGS. 1-2, “general state” or “bending working state”) and the deployed configuration (Guan, FIG. 1, “flexible screen 20” in “flat working state”), as taught by Guan, in order to enable automatic transition between the two configurations.
Guan in view of An, Ohe, Kosugi and Lee does not teach that the conditions include “an amount of time over which the gaze direction of the subject has corresponded to looking toward the display screen”.
However, Henderek teaches the concept that the conditions include an amount of time over which the gaze direction of the subject has corresponded to looking toward the display screen (Henderek, FIG. 5, [0051], “the gaze detection program module 123 may be configured to recognize the gaze point 130 as a signal of the user's intent to wake the computing device 101 if the user dwells or fixates on the wake zone 502 for a predetermined period of time (e.g., if the gaze point 130 remains in the vicinity of the wake zone 502 until expiration of a threshold amount of time)”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the “analyzer 51” taught by Ohe to further recognize the user’s intent to wake the display screen from the amount of time the user’s gaze dwells or fixates on it, as taught by Henderek, in order to enable “automatically waking a computing device from a stand-by mode in response to gaze detection” (Henderek, [0007]).
Claim 13 is rejected under 35 U.S.C. 103 as being unpatentable over Guan (CN 113294634 A) in view of An (US 2018/0137801 A1), Ohe et al. (US 2020/0319836 A1), Kosugi (US 2021/0397265 A1), Lee et al. (US 2023/0405435 A1), Henderek et al. (US 2014/0247208 A1) and Majdabadi et al. (US 2025/0103790 A1).
As to claim 13, Lee teaches the display device of claim 11, wherein the one or more transition conditions include an output from a trained machine learning model,
wherein the trained machine learning model receives input information derived from the location of the subject (Lee, [0212], “a location of the at least one object (specifically, a user) detected in the first image”) and gaze direction of the subject (Lee, [0212], “a gaze direction of the at least one object”), and wherein the trained machine learning model estimates whether the subject intends transition of the display screen between the stored configuration and the deployed configuration based on the location of the subject and the gaze direction of the subject (Lee, [0210], “The operation of detecting the human in the first image and the operation of recognizing or extracting the training target object may be performed by using a computer vision technology, AI object recognition, a machine learning technology, and the like. Furthermore, the operation of detecting the human (or the user) in the first image and the operation of recognizing or extracting the training target object may be performed by an external device receiving the first image (e.g., a server for performing AI operations)”). Examiner renders the same motivation as in claim 12.
Guan in view of An, Ohe, Kosugi, Lee and Henderek does not teach that the trained machine learning model is “trained using training examples that correlate subject behavior to a presence or absence of intent to transition the display screen between the stored configuration and the deployed configuration”; and that the trained machine learning model receives “time of day, a schedule associated with the subject, and historical information of previous use of the display device by the subject”.
However, Majdabadi teaches the concepts that the trained machine learning model is trained using training examples that correlate subject behavior to a presence or absence of intent to transition the display screen between the stored configuration and the deployed configuration; and that the trained machine learning model receives “time of day, a schedule associated with the subject, and historical information of previous use of the display device by the subject” (Majdabadi, [0076], “the method may be directed to apply conditional formatting when one or more of the following prerequisites have been identified or reached a threshold level: automated report viewer intent and interest based on their role and past behavior … trained machine learning models … applied machine learning models to determine applicable conditional formatting rules, automated report generation and distribution, customized conditional formatting based on viewer role, predicted intent, and interest”, etc.).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the “machine learning” implemented by Lee to further comprise the user’s “past behavior”, “predicted intent, and interest”, etc., as taught by Majdabadi, in order to “receive information about a user, tracking a gaze of the user on a user interface, generating a real-time gaze heat map for the user, feeding information about the user and the real-time gaze heat map into a machine learning model” (Majdabadi, [0004]).
Allowable Subject Matter
Claims 16 and 18-20 are allowed.
The following is an examiner’s statement of reasons for allowance:
As to claim 16, it is persuasive that “claim 16 is amended to include allowable subject matter from dependent claim 17, which is cancelled” (Remarks, p. 13).
Accordingly, the closest known prior art, i.e., Guan (CN 113294634 A), Ohe et al. (US 2020/0319836 A1), An (US 2018/0137801 A1), Lee et al. (US 2023/0405435 A1), Jung et al. (US 2013/0265262 A1), Kosugi (US 2021/0397265 A1), Lee et al. (US 2020/0104581 A1) and Im et al. (US 2013/0050425 A1), alone or in reasonable combination, fails to teach the limitations of the claims considered as a whole, specifically the limitations “a fixed display screen that is coupled to the base structure, wherein the display screen and the fixed display screen are controllable to display a continuous image when the display screen is in the stored configuration”.
As to claims 18-20, they depend from claim 16 and are allowed at least for the reasons above.
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure: Lee et al. (US 2020/0104581 A1) teaches the concept of “controlling the electronic device to move … based on the first distance being within a second predefined range” (Abs.); and Im et al. (US 2013/0050425 A1) teaches the concept that “a resolution of the depth image is adjusted according to a gesture recognition mode” (Abs.).
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RICHARD J HONG whose telephone number is (571) 270-7765. The examiner can normally be reached from 9:00 AM to 6:00 PM EST.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Chanh Nguyen can be reached on (571) 272-7772. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
Mar. 20, 2026
/RICHARD J HONG/Primary Examiner, Art Unit 2623