Prosecution Insights
Last updated: April 19, 2026
Application No. 18/741,136

Display Method and Related Apparatus

Non-Final OA · §102 · §103 · Double Patenting · Other
Filed: Jun 12, 2024
Examiner: BARRETT, RYAN S
Art Unit: 2148
Tech Center: 2100 — Computer Architecture & Software
Assignee: Huawei Technologies Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 64% (Moderate)
OA Rounds: 1-2
To Grant: 3y 1m
With Interview: 99%

Examiner Intelligence

Grants 64% of resolved cases.

Career Allow Rate: 64% (263 granted / 409 resolved; +9.3% vs TC avg)
Interview Lift: +43.7% (strong; resolved cases with vs. without interview)
Avg Prosecution: 3y 1m (typical timeline); 24 currently pending
Total Applications: 433 (career history, across all art units)
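The headline figures above follow directly from the raw counts. As a quick sanity check, this small Python sketch (illustrative only, not part of any official record) re-derives the career allow rate and backs out the implied Tech Center average, assuming the "+9.3% vs TC avg" delta is a simple percentage-point difference:

```python
# Re-derive the examiner's career allow rate from the counts shown above.
granted = 263
resolved = 409

allow_rate = 100 * granted / resolved
print(f"Career allow rate: {allow_rate:.1f}%")  # 64.3%, displayed as 64%

# Assuming the delta is in percentage points above the Tech Center average,
# the implied TC baseline is about 55%.
delta_vs_tc = 9.3
implied_tc_avg = allow_rate - delta_vs_tc
print(f"Implied TC average: {implied_tc_avg:.1f}%")  # ~55.0%
```

The same arithmetic applies to any examiner-vs-baseline delta reported on this page.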

Statute-Specific Performance

§101: 10.6% (-29.4% vs TC avg)
§103: 38.7% (-1.3% vs TC avg)
§102: 12.9% (-27.1% vs TC avg)
§112: 10.8% (-29.2% vs TC avg)

Tech Center averages are estimates; based on career data from 409 resolved cases.
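Each statute row pairs a rate with a delta vs the Tech Center average. Backing the baseline out of each row (rate minus delta, again assuming the deltas are percentage-point differences) is a useful consistency check; a minimal sketch:

```python
# Statute-specific rates and deltas vs TC average, as listed above.
stats = {
    "§101": (10.6, -29.4),
    "§103": (38.7, -1.3),
    "§102": (12.9, -27.1),
    "§112": (10.8, -29.2),
}

# Implied TC average per statute: rate - delta.
for statute, (rate, delta) in stats.items():
    print(f"{statute}: implied TC avg = {rate - delta:.1f}%")
# Every row backs out to ~40.0%, so the reported deltas are internally consistent.
```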

Office Action

§102 · §103 · Double Patenting · Other
DETAILED ACTION

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is responsive to the Application filed on 8/26/2024. Claims 1-14 are pending in the case. Claims 1, 6, and 13-14 are independent claims.

Double Patenting

The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory obviousness-type double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 U.S.P.Q.2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 U.S.P.Q.2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 U.S.P.Q. 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 U.S.P.Q. 761 (C.C.P.A. 1982); In re Vogel, 422 F.2d 438, 164 U.S.P.Q. 619 (C.C.P.A. 1970); and In re Thorington, 418 F.2d 528, 163 U.S.P.Q. 644 (C.C.P.A. 1969).

A timely filed terminal disclaimer in compliance with 37 C.F.R. §§ 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on a nonstatutory double patenting ground, provided the conflicting application or patent either is shown to be commonly owned with this application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. Effective January 1, 1994, a registered attorney or agent of record may sign a terminal disclaimer. A terminal disclaimer signed by the assignee must fully comply with 37 C.F.R. § 3.73(b).
Claims 1-4, 6-11, and 13-14 are rejected on the ground of nonstatutory double patenting over the claims of US 12,039,144 B2. The conflicting claims are compared below.

Instant Application, claim 1: A display method, wherein the display method is applied to an information interaction system, the information interaction system comprises an electronic device and a large-screen device, and the method comprises: obtaining, by the electronic device, preset information, wherein the preset information comprises information about an application installed on the electronic device, and the information about the application comprises at least one of a name, an icon, and a package name of the application; generating, by the electronic device, a first window based on the preset information, wherein the first window comprises any one of a plurality of desktops of the electronic device; and sending, by the electronic device, the first window to the large-screen device, so that the large-screen device displays a first user interface, wherein the first user interface comprises the first window.

US 12,039,144 B2, claim 1: A method comprising: obtaining preset information about a first application installed on the mobile device, wherein the preset information comprises at least one of a name of the first application, an icon of the first application, or a package name of the first application; generating, based on the preset information, a first window comprising one of a plurality of desktops of the mobile device and a first control; sending, to the large-screen device, the first window to enable the large-screen device to display a first user interface consisting of the first window; and
Instant Application, claim 2: The method according to claim 1, wherein the first window further comprises a first control, the first control is configured to adjust a quantity of desktops of the electronic device comprised in the first window; and after the sending, by the electronic device, the first window to the large-screen device, the method further comprises: receiving, by the electronic device, a first instruction sent by the large-screen device, wherein the first instruction is an instruction generated by the large-screen device in response to a first operation on the first control, and the first instruction is used to obtain two desktops of the electronic device; generating, by the electronic device, a second window according to the first instruction, wherein the second window comprises any two of the plurality of desktops of the electronic device; and sending, by the electronic device, the second window to the large-screen device, so that the large-screen device displays a second user interface, wherein the second user interface comprises the second window.

US 12,039,144 B2, claim 2: The method of claim 1, wherein the first window further comprises a first control configured to adjust a quantity of the desktops in the first window, and wherein the method further comprises: receiving, from the large screen device, a first instruction in response to a first operation on the first control, wherein the first instruction instructs to obtain two desktops of the mobile device; generating, according to the first instruction, a second window consisting of the two desktops; and sending, to the large screen device, the second window to enable the large screen device to display a second user interface consisting of the second window.
Instant Application, claim 3: The method according to claim 2, wherein the electronic device comprises three or more desktops, the second window further comprises the first control, and after the sending, by the electronic device, the second window to the large-screen device, the method further comprises: receiving, by the electronic device, a second instruction sent by the large-screen device, wherein the second instruction is an instruction generated by the large-screen device in response to a second operation on the first control in the second window, and the second instruction is used to obtain three desktops of the electronic device; generating, by the electronic device, a third window according to the second instruction, wherein the third window comprises any three of the plurality of desktops of the electronic device; and sending, by the electronic device, the third window to the large-screen device, so that the large-screen device displays a third user interface, wherein the third user interface comprises the third window.

US 12,039,144 B2, claim 3: The method of claim 2, wherein the second window further comprises the first control, and wherein after sending the second window, the method further comprises: receiving, from the large-screen device, a second instruction in response to a second operation on the first control in the second window, wherein the mobile device comprises three or more desktops, and wherein the second instruction instructs to obtain three desktops of the mobile device; generating, according to the second instruction, a third window consisting of the three desktops; and sending, to the large-screen device, the third window to enable the large-screen device to display a third user interface consisting of the third window.
Instant Application, claim 4: The method according to claim 1, wherein the first window further comprises a second control, the second control is configured to display, in the large-screen device, an interface of an application running in the electronic device, and after the sending, by the electronic device, the first window to the large-screen device, the method further comprises: receiving, by the electronic device, a third instruction sent by the large-screen device, wherein the third instruction is an instruction generated by the large-screen device in response to a third operation on the second control; generating, by the electronic device, a fourth window according to the third instruction, wherein the fourth window comprises the interface of the application running in the electronic device; and sending, by the electronic device, the fourth window to the large-screen device, so that the large-screen device displays a fourth user interface, wherein the fourth user interface comprises the fourth window.

US 12,039,144 B2, claim 4: The method of claim 1, wherein the first window further comprises a second control configured to display, in the large-screen device, an interface of a second application running in the mobile device, and wherein after sending the first window, the method further comprises: receiving, from the large-screen device, a third instruction in response to a third operation on the second control; generating, according to the third instruction, a fourth window comprising the interface; and sending, to the large-screen device, the fourth window to enable the large-screen device to display a fourth user interface consisting of the fourth window.
Instant Application, claim 6: A display method, wherein the display method is applied to an information interaction system, the information interaction system comprises an electronic device and a large-screen device, and the method comprises: receiving, by the large-screen device, a first window sent by the electronic device, wherein the first window is a window generated by the electronic device based on preset information, the preset information comprises information about an application installed on the electronic device, and the information about the application comprises at least one of a name, an icon, and a package name of the application; and displaying, by the large-screen device, a first user interface based on the first window, wherein the first user interface comprises the first window, and the first window comprises any one of a plurality of desktops of the electronic device.

US 12,039,144 B2, claim 5: A method comprising: receiving, from the mobile device, a first window based on preset information about a first application installed on the mobile device, wherein the preset information comprises at least one of a name of the first application, an icon of the first application, or a package name of the first application, and wherein the first window comprises one of a plurality of desktops of the mobile device and a first control; displaying, by the large-screen device, based on the first window, a first user interface comprising the first window; and
Instant Application, claim 7: The method according to claim 6, wherein the first window further comprises a first control, the first control is configured to adjust a quantity of desktops of the electronic device comprised in the first window, and after the displaying, by the large-screen device, a first user interface based on the first window, the method further comprises: receiving, by the large-screen device, a first operation on the first control; and displaying, by the large-screen device, a second user interface in response to the first operation, wherein the second user interface comprises a second window, and the second window comprises any two of the plurality of desktops of the electronic device.

US 12,039,144 B2, claim 6: The method of claim 5, wherein the first window further comprises a first control configured to adjust a quantity of the desktops in the first window, and wherein after displaying the first user interface, the method further comprises: receiving a first operation on the first control; and displaying, in response to the first operation, a second user interface comprising a second window, wherein the second window consists of two of the desktops.

Instant Application, claim 8: The method according to claim 7, wherein the electronic device comprises three or more desktops, the second window further comprises the first control, and after the displaying, by the large-screen device, a second user interface, the method further comprises: receiving, by the large-screen device, a second operation on the first control in the second window; and displaying, by the large-screen device, a third user interface in response to the second operation, wherein the third user interface comprises a third window, and the third window comprises any three of the plurality of desktops of the electronic device.
US 12,039,144 B2, claim 7: The method of claim 5, wherein the mobile device comprises three or more desktops, wherein the second window further comprises the first control, and wherein after displaying the second user interface, the method further comprises: receiving a second operation on the first control in the second window; and displaying, in response to the second operation, a third user interface consisting of a third window, wherein the third window consists of three of the desktops.

Instant Application, claim 9: The method according to claim 6, wherein the first window further comprises a second control, the second control is configured to display, in the large-screen device, an interface of an application running in the electronic device, and after the displaying, by the large-screen device, a first user interface based on the first window, the method further comprises: receiving, by the large-screen device, a third operation on the second control; and displaying, by the large-screen device, a fourth user interface in response to the third operation, wherein the fourth user interface comprises the interface of the application running in the electronic device.

US 12,039,144 B2, claim 8: The method of claim 5, wherein the first window further comprises a second control configured to display, in the large-screen device, an interface of a second application running in the mobile device, and wherein after displaying the first user interface, the method further comprises: receiving a third operation on the second control; and displaying, in response to the third operation, a fourth user interface consisting of the interface of the second application.
Instant Application, claim 10: The method according to claim 9, wherein after the displaying, by the large-screen device, a fourth user interface, the method further comprises: receiving, by the large-screen device, a fourth operation on a first application interface in the fourth user interface; and displaying, by the large-screen device, a fifth user interface in response to the fourth operation, wherein the fifth user interface comprises the first application interface that is activated, and the first application interface is an interface of any application in interfaces of applications running in the electronic device.

US 12,039,144 B2, claim 9: The method of claim 8, wherein after displaying the fourth user interface, the method further comprises: receiving a fourth operation on a first application interface in the fourth user interface; and displaying, in response to the fourth operation, a fifth user interface consisting of the first application interface.

Instant Application, claim 11: The method according to claim 10, wherein the fifth user interface comprises the second control, and after the displaying, by the large-screen device, a fifth user interface, the method further comprises: receiving, by the large-screen device, a fifth operation on the second control in the fifth user interface; and displaying, by the large-screen device, the fourth user interface in response to the fifth operation.

US 12,039,144 B2, claim 10: The method of claim 9, wherein the fifth user interface comprises the second control, and wherein after displaying the fifth user interface, the method further comprises: receiving a fifth operation on the second control in the fifth user interface; and displaying, in response to the fifth operation, the fourth user interface.
Instant Application, claim 13: An electronic device, wherein the electronic device comprises one or more processors, a memory, and a communications module, the memory and the communications module are coupled to the one or more processors, the memory stores a computer program, and when the one or more processors execute the computer program, the electronic device performs: obtaining, by the electronic device, preset information, wherein the preset information comprises information about an application installed on the electronic device, and the information about the application comprises at least one of a name, an icon, and a package name of the application; generating, by the electronic device, a first window based on the preset information, wherein the first window comprises any one of a plurality of desktops of the electronic device; and sending, by the electronic device, the first window to the large-screen device, so that the large-screen device displays a first user interface, wherein the first user interface comprises the first window.

US 12,039,144 B2, claim 1: A method comprising: obtaining preset information about a first application installed on the mobile device, wherein the preset information comprises at least one of a name of the first application, an icon of the first application, or a package name of the first application; generating, based on the preset information, a first window comprising one of a plurality of desktops of the mobile device and a first control; sending, to the large-screen device, the first window to enable the large-screen device to display a first user interface consisting of the first window; and
Instant Application, claim 14: A large-screen device, wherein the large-screen device comprises one or more processors, a memory, and a communications module, the memory and the communications module are coupled to the one or more processors, the memory stores a computer program, and when the one or more processors execute the computer program, the large-screen device performs: receiving, by the large-screen device, a first window sent by the electronic device, wherein the first window is a window generated by the electronic device based on preset information, the preset information comprises information about an application installed on the electronic device, and the information about the application comprises at least one of a name, an icon, and a package name of the application; and displaying, by the large-screen device, a first user interface based on the first window, wherein the first user interface comprises the first window, and the first window comprises any one of a plurality of desktops of the electronic device.

US 12,039,144 B2, claim 5: A method comprising: receiving, from the mobile device, a first window based on preset information about a first application installed on the mobile device, wherein the preset information comprises at least one of a name of the first application, an icon of the first application, or a package name of the first application, and wherein the first window comprises one of a plurality of desktops of the mobile device and a first control; displaying, by the large-screen device, based on the first window, a first user interface comprising the first window; and

Claims 5 and 12 are rejected on the ground of nonstatutory double patenting over US 12,039,144 B2 in view of Wasko (US 2011/0061010 A1). As to dependent claim 5, the rejection of claim 1 is incorporated.
US 12,039,144 B2 does not appear to expressly teach a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device. Wasko teaches a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device (“The management application program typically allows the user to view, browse and/or search application programs on the PED 102 using the GUI on the computing device 104,” paragraph 0017 lines 8-11). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the window of US 12,039,144 B2 to comprise the search control of Wasko.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.

(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely making it easy to find a particular application (“rather than having to look through all the pages and/or scroll through the list of user selectable application list, the sort feature 264 and/or the search feature 262 provides a user-friendly and efficient method to locate specific application icons and/or user selectable application,” Wasko paragraph 0034 lines 4-9).
Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods would yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

As to dependent claim 12, the rejection of claim 6 is incorporated. US 12,039,144 B2 does not appear to expressly teach a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device. Wasko teaches a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device (“The management application program typically allows the user to view, browse and/or search application programs on the PED 102 using the GUI on the computing device 104,” paragraph 0017 lines 8-11). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the window of US 12,039,144 B2 to comprise the search control of Wasko.

(1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference.

(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately.
(3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely making it easy to find a particular application (“rather than having to look through all the pages and/or scroll through the list of user selectable application list, the sort feature 264 and/or the search feature 262 provides a user-friendly and efficient method to locate specific application icons and/or user selectable application,” Wasko paragraph 0034 lines 4-9).

Therefore, the rationale to support a conclusion that the claim would have been obvious is that combining prior art elements according to known methods would yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A).

Claim Rejections - 35 U.S.C. § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. § 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1, 4, 6, 9-11, and 13-14 are rejected under 35 U.S.C. § 102(a)(1) as being anticipated by Yun et al.
(US 2012/0088548 A1, hereinafter Yun).

As to independent claim 1, Yun discloses a display method, wherein the display method is applied to an information interaction system, the information interaction system comprises an electronic device and a large-screen device (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), and the method comprises: obtaining, by the electronic device, preset information, wherein the preset information comprises information about an application installed on the electronic device (“referring to FIG. 5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3), and the information about the application comprises at least one of a name, an icon, and a package name of the application (“at least one object such as an application icon, a menu icon, a file icon, a widget and the like can be provided to each of the home screen images,” paragraph 0101 lines 1-3); generating, by the electronic device, a first window based on the preset information, wherein the first window comprises any one of a plurality of desktops of the electronic device (“referring to FIG. 5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3); and sending, by the electronic device, the first window to the large-screen device, so that the large-screen device displays a first user interface (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), wherein the first user interface comprises the first window (figure 3 part 400).
As to dependent claim 4, Yun further discloses a method wherein the first window further comprises a second control (“object A is a multimedia play menu icon,” Yun paragraph 0164 line 9), the second control is configured to display, in the large-screen device, an interface of an application (“Referring to FIG. 20(20-3), the second controller 280 of the display device 200 controls the second multimedia content image 560 and the second and third subimages 520 and 530 corresponding to the second and third home screen images 320 and 330 to be displayed as the second screen image 500 on the monitor window 400 of the second display unit 151,” Yun paragraph 0169 lines 1-7) running in the electronic device (“the first controller 180 of the mobile terminal 100 plays back a corresponding multimedia content,” Yun paragraph 0165 lines 1-2), and after the sending, by the electronic device, the first window to the large-screen device, the method further comprises: receiving, by the electronic device (“a user can indirectly manipulate the mobile terminal 100 by manipulating the monitor window 400 of the display device 200 instead of manipulating the mobile terminal 100 in direct,” Yun paragraph 0084 lines 4-7), a third instruction sent by the large-screen device, wherein the third instruction is an instruction generated by the large-screen device in response to a third operation on the second control (“an object A of the first home screen image 310 is selected and executed,” Yun paragraph 0164 lines 4-5); generating, by the electronic device, a fourth window according to the third instruction, wherein the fourth window comprises the interface of the application running in the electronic device (“Referring to FIG. 
20(20-3), the second controller 280 of the display device 200 controls the second multimedia content image 560 and the second and third subimages 520 and 530 corresponding to the second and third home screen images 320 and 330 to be displayed as the second screen image 500 on the monitor window 400 of the second display unit 151,” Yun paragraph 0169 lines 1-7); and sending, by the electronic device, the fourth window to the large-screen device (“Referring to FIG. 20(20-3), the second controller 280 of the display device 200 controls the second multimedia content image 560 and the second and third subimages 520 and 530 corresponding to the second and third home screen images 320 and 330 to be displayed as the second screen image 500 on the monitor window 400 of the second display unit 151,” Yun paragraph 0169 lines 1-7), so that the large-screen device displays a fourth user interface, wherein the fourth user interface comprises the fourth window (figure 20-3 part 400).

As to independent claim 6, Yun discloses a display method, wherein the display method is applied to an information interaction system, the information interaction system comprises an electronic device and a large-screen device (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), and the method comprises: receiving, by the large-screen device, a first window sent by the electronic device (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), wherein the first window is a window generated by the electronic device based on preset information, the preset information comprises
information about an application installed on the electronic device (“referring to FIG. 5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3), and the information about the application comprises at least one of a name, an icon, and a package name of the application (“at least one object such as an application icon, a menu icon, a file icon, a widget and the like can be provided to each of the home screen images,” paragraph 0101 lines 1-3); and displaying, by the large-screen device, a first user interface based on the first window (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), wherein the first user interface comprises the first window (figure 3 part 400), and the first window comprises any one of a plurality of desktops of the electronic device (“referring to FIG. 5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3).

As to dependent claim 9, Yun further discloses a method wherein the first window further comprises a second control (“object A is a multimedia play menu icon,” Yun paragraph 0164 line 9), the second control is configured to display, in the large-screen device, an interface of an application (“Referring to FIG.
20(20-3), the second controller 280 of the display device 200 controls the second multimedia content image 560 and the second and third subimages 520 and 530 corresponding to the second and third home screen images 320 and 330 to be displayed as the second screen image 500 on the monitor window 400 of the second display unit 151,” Yun paragraph 0169 lines 1-7) running in the electronic device (“the first controller 180 of the mobile terminal 100 plays back a corresponding multimedia content,” Yun paragraph 0165 lines 1-2), and after the displaying, by the large-screen device, a first user interface based on the first window, the method further comprises: receiving, by the large-screen device (“a user can indirectly manipulate the mobile terminal 100 by manipulating the monitor window 400 of the display device 200 instead of manipulating the mobile terminal 100 in direct,” Yun paragraph 0084 lines 4-7), a third operation on the second control (“an object A of the first home screen image 310 is selected and executed,” Yun paragraph 0164 lines 4-5); and displaying, by the large-screen device, a fourth user interface in response to the third operation (figure 20-3 part 400), wherein the fourth user interface comprises the interface of the application running in the electronic device (“Referring to FIG. 20(20-3), the second controller 280 of the display device 200 controls the second multimedia content image 560 and the second and third subimages 520 and 530 corresponding to the second and third home screen images 320 and 330 to be displayed as the second screen image 500 on the monitor window 400 of the second display unit 151,” Yun paragraph 0169 lines 1-7). 
As to dependent claim 10, Yun further discloses a method wherein after the displaying, by the large-screen device, a fourth user interface, the method further comprises: receiving, by the large-screen device, a fourth operation on a first application interface in the fourth user interface (“the user command can be input by touching the object I of the third subimage 530 displayed on the second display unit 251,” Yun paragraph 0172 lines 7-9); and displaying, by the large-screen device, a fifth user interface (Yun figure 21-3 part 400) in response to the fourth operation, wherein the fifth user interface comprises the first application interface that is activated, and the first application interface is an interface of any application in interfaces of applications running in the electronic device (“in response to the control signal indicating that the user command for selecting the object I has been input, the first controller 180 of the mobile terminal 100 stops executing the multimedia play menu and then controls the message menu to be executed,” Yun paragraph 0176 lines 1-5). As to dependent claim 11, Yun further discloses a method wherein the fifth user interface comprises the second control (“object A is a multimedia play menu icon,” Yun paragraph 0164 line 9), and after the displaying, by the large-screen device, a fifth user interface, the method further comprises: receiving, by the large-screen device, a fifth operation (“a user can indirectly manipulate the mobile terminal 100 by manipulating the monitor window 400 of the display device 200 instead of manipulating the mobile terminal 100 in direct,” Yun paragraph 0084 lines 4-7) on the second control in the fifth user interface (“an object A of the first home screen image 310 is selected and executed,” Yun paragraph 0164 lines 4-5); and displaying, by the large-screen device, the fourth user interface (“Referring to FIG. 
20(20-3), the second controller 280 of the display device 200 controls the second multimedia content image 560 and the second and third subimages 520 and 530 corresponding to the second and third home screen images 320 and 330 to be displayed as the second screen image 500 on the monitor window 400 of the second display unit 151,” Yun paragraph 0169 lines 1-7) in response to the fifth operation (in the non-multitasking option of Yun paragraphs 0176-0178 and Yun figure 21-3 (as opposed to the multitasking option of Yun paragraphs 0173-0175 and Yun figure 21-2), restarting execution of the “multimedia play menu” stops execution of the “message menu” and returns the display to Yun figure 20-3 part 400 (as opposed to Yun figure 21-2)). As to independent claim 13, Yun discloses an electronic device, wherein the electronic device comprises one or more processors (figure 1 part 180 “Controller”), a memory (figure 1 part 160 “Memory”), and a communications module (figure 1 part 110 “Wireless communication unit”), the memory and the communications module are coupled to the one or more processors (figure 1), the memory stores a computer program, and when the one or more processors execute the computer program (“the above-described methods can be implemented in a program recorded medium as computer-readable codes,” paragraph 0210 lines 1-3), the electronic device performs: obtaining, by the electronic device, preset information, wherein the preset information comprises information about an application installed on the electronic device (“referring to FIG. 
5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3), and the information about the application comprises at least one of a name, an icon, and a package name of the application (“at least one object such as an application icon, a menu icon, a file icon, a widget and the like can be provided to each of the home screen images,” paragraph 0101 lines 1-3); generating, by the electronic device, a first window based on the preset information, wherein the first window comprises any one of a plurality of desktops of the electronic device (“referring to FIG. 5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3); and sending, by the electronic device, the first window to the large-screen device, so that the large-screen device displays a first user interface (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), wherein the first user interface comprises the first window (figure 3 part 400). 
As to independent claim 14, Yun discloses a large-screen device, wherein the large-screen device comprises one or more processors (figure 2 part 280 “Controller”), a memory (figure 2 part 260 “Memory”), and a communications module (figure 2 part 210 “Wireless communication unit”), the memory and the communications module are coupled to the one or more processors (figure 2), the memory stores a computer program, and when the one or more processors execute the computer program (“the above-described methods can be implemented in a program recorded medium as computer-readable codes,” paragraph 0210 lines 1-3), the large-screen device performs: receiving, by the large-screen device, a first window sent by the electronic device (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), wherein the first window is a window generated by the electronic device based on preset information, the preset information comprises information about an application installed on the electronic device (“referring to FIG. 
5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3), and the information about the application comprises at least one of a name, an icon, and a package name of the application (“at least one object such as an application icon, a menu icon, a file icon, a widget and the like can be provided to each of the home screen images,” paragraph 0101 lines 1-3); and displaying, by the large-screen device, a first user interface based on the first window (“when the mobile terminal 100 and the display device 200 are connected to each other, the second controller 280 of the display device 200 can generate and display a monitor window 400 for the first screen image on the second display unit 251,” paragraph 0080 lines 1-5), wherein the first user interface comprises the first window (figure 3 part 400), and the first window comprises any one of a plurality of desktops of the electronic device (“referring to FIG. 5(5-1), at least two home screen images are prepared in the mobile terminal 100 in advance,” paragraph 0100 lines 1-3). Claim Rejections - 35 U.S.C. § 103 In the event the determination of the status of the application as subject to AIA 35 U.S.C. §§ 102 and 103 (or as subject to pre-AIA 35 U.S.C. §§ 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C. 
§ 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made. This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 C.F.R. § 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. § 102(b)(2)(C) for any potential 35 U.S.C. § 102(a)(2) prior art against the later invention. Claims 2-3 and 7-8 are rejected under 35 U.S.C. § 103 as being unpatentable over Yun in view of Green (US 2010/0223563 A1). As to dependent claim 2, the rejection of claim 1 is incorporated. Yun further teaches a method wherein after the sending, by the electronic device, the first window to the large-screen device, the method further comprises: receiving, by the electronic device, a first instruction sent by the large-screen device, wherein the first instruction is an instruction generated by the large-screen device in response to a first operation, and the first instruction is used to obtain two desktops of the electronic device (“Referring to FIG. 
13(13-2), the user can input a command slidably shift the first and second subimages 510 and 520 within the monitor window 400 via the second user input unit 230 of the display device 200 or via a touch gesture. For instance, the user command can be input by clicking the second screen image 520 and then dragging in one direction via the mouse. Alternatively, the user can use a touch and drag of flicking operation. If so, the second controller 280 of the display device 200 transmits a control signal, which indicates that the user command is for slidably shifting the first and second subimages 510 and 520, to the mobile terminal 100,” paragraph 0142 line 1 to paragraph 0143 line 4); generating, by the electronic device, a second window according to the first instruction, wherein the second window comprises any two of the plurality of desktops of the electronic device (“In response to the control signal, the first controller 180 of the mobile terminal 100 controls information on the first screen, which includes the information on the second and third home screen images 320 and 330, to be transmitted to the display device 200,” paragraph 0143 lines 4-9); and sending, by the electronic device, the second window to the large-screen device (“In response to the control signal, the first controller 180 of the mobile terminal 100 controls information on the first screen, which includes the information on the second and third home screen images 320 and 330, to be transmitted to the display device 200,” paragraph 0143 lines 4-9), so that the large-screen device displays a second user interface, wherein the second user interface comprises the second window (figure 13 part 400). Yun does not appear to expressly teach a method wherein the first window further comprises a first control, the first control is configured to adjust a quantity of desktops of the electronic device comprised in the first window; and the first operation is on the first control. 
Green teaches a method wherein the first window further comprises a first control, the first control is configured to adjust a quantity of desktops of the electronic device comprised in the first window; and the first operation is on the first control (“the user can, for example, select add view button 615 to create a new view representation,” paragraph 0029 lines 2-3). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the window of Yun to comprise the control of Green. (1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference. (2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately. (3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely allowing the user to arrange objects on different views as they desire (“If a user desires to place some of the set of objects 620 on a second view, the user can, for example, select add view button 615 to create a new view representation.… When the user has arranged the objects as desired …,” paragraph 0029 lines 1-3, 7-8). Therefore, the rationale to support a conclusion that the claim would have been obvious is combining prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A). As to dependent claim 3, the rejection of claim 2 is incorporated. 
Yun/Green further teaches a method wherein the electronic device comprises three or more desktops, the second window further comprises the first control (“the user can, for example, select add view button 615 to create a new view representation,” Green paragraph 0029 lines 2-3), and after the sending, by the electronic device, the second window to the large-screen device, the method further comprises: receiving, by the electronic device, a second instruction sent by the large-screen device, wherein the second instruction is an instruction generated by the large-screen device in response to a second operation on the first control in the second window (“the user can, for example, select add view button 615 to create a new view representation,” Green paragraph 0029 lines 2-3), and the second instruction is used to obtain three desktops of the electronic device (“the user can touch and drag/flick the object H to the second subimage 520. If so, the second controller 280 transmits a control signal, which indicates that the user has shifted the object H of the third subimage 530 to the second subimage 520, to the mobile terminal 100,” Yun paragraph 0124 line 10 to paragraph 0125 line 4); generating, by the electronic device, a third window according to the second instruction, wherein the third window comprises any three of the plurality of desktops of the electronic device (“in response to the control signal, the first controller 180 controls the object H to be shifted to the second home screen image 320 from the third home screen image 330. In more detail, and referring to FIG. 
10(10-1), when the second home screen image 320 is displayed as the output home screen image (i.e., the first screen image 300) on the first display unit 151 of the mobile terminal 100, the object H is shifted by sliding into the output home screen image from a right side of the first display unit 151 corresponding to the third home screen image 330,” Yun paragraph 0125 lines 4-7); and sending, by the electronic device, the third window to the large-screen device (“in response to the control signal, the first controller 180 controls the object H to be shifted to the second home screen image 320 from the third home screen image 330. In more detail, and referring to FIG. 10(10-1), when the second home screen image 320 is displayed as the output home screen image (i.e., the first screen image 300) on the first display unit 151 of the mobile terminal 100, the object H is shifted by sliding into the output home screen image from a right side of the first display unit 151 corresponding to the third home screen image 330,” Yun paragraph 0125 lines 4-7), so that the large-screen device displays a third user interface, wherein the third user interface comprises the third window (Yun figure 10 part 400). As to dependent claim 7, the rejection of claim 6 is incorporated. Yun further teaches a method wherein after the displaying, by the large-screen device, a first user interface based on the first window, the method further comprises: receiving, by the large-screen device, a first operation (“Referring to FIG. 13(13-2), the user can input a command slidably shift the first and second subimages 510 and 520 within the monitor window 400 via the second user input unit 230 of the display device 200 or via a touch gesture. For instance, the user command can be input by clicking the second screen image 520 and then dragging in one direction via the mouse. Alternatively, the user can use a touch and drag of flicking operation. 
If so, the second controller 280 of the display device 200 transmits a control signal, which indicates that the user command is for slidably shifting the first and second subimages 510 and 520, to the mobile terminal 100,” paragraph 0142 line 1 to paragraph 0143 line 4); and displaying, by the large-screen device, a second user interface in response to the first operation (“In response to the control signal, the first controller 180 of the mobile terminal 100 controls information on the first screen, which includes the information on the second and third home screen images 320 and 330, to be transmitted to the display device 200,” paragraph 0143 lines 4-9), wherein the second user interface comprises a second window, and the second window comprises any two of the plurality of desktops of the electronic device (figure 13 part 400). Yun does not appear to expressly teach a method wherein the first window further comprises a first control, the first control is configured to adjust a quantity of desktops of the electronic device comprised in the first window; and the first operation is on the first control. Green teaches a method wherein the first window further comprises a first control, the first control is configured to adjust a quantity of desktops of the electronic device comprised in the first window; and the first operation is on the first control (“the user can, for example, select add view button 615 to create a new view representation,” paragraph 0029 lines 2-3). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the window of Yun to comprise the control of Green. (1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference. 
(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately. (3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely allowing the user to arrange objects on different views as they desire (“If a user desires to place some of the set of objects 620 on a second view, the user can, for example, select add view button 615 to create a new view representation.… When the user has arranged the objects as desired …,” paragraph 0029 lines 1-3, 7-8). Therefore, the rationale to support a conclusion that the claim would have been obvious is combining prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A). As to dependent claim 8, the rejection of claim 7 is incorporated. Yun/Green further teaches a method wherein the electronic device comprises three or more desktops, the second window further comprises the first control (“the user can, for example, select add view button 615 to create a new view representation,” Green paragraph 0029 lines 2-3), and after the displaying, by the large-screen device, a second user interface, the method further comprises: receiving, by the large-screen device, a second operation on the first control in the second window (“the user can, for example, select add view button 615 to create a new view representation,” Green paragraph 0029 lines 2-3); and displaying, by the large-screen device, a third user interface in response to the second operation, wherein the third user interface comprises a third window (“in response to the control signal, the first controller 180 controls the object H to be shifted to the second home screen image 320 from the third home screen image 330. 
In more detail, and referring to FIG. 10(10-1), when the second home screen image 320 is displayed as the output home screen image (i.e., the first screen image 300) on the first display unit 151 of the mobile terminal 100, the object H is shifted by sliding into the output home screen image from a right side of the first display unit 151 corresponding to the third home screen image 330,” Yun paragraph 0125 lines 4-7), and the third window comprises any three of the plurality of desktops of the electronic device (Yun figure 10 part 400). Claims 5 and 12 are rejected under 35 U.S.C. § 103 as being unpatentable over Yun in view of Wasko (US 2011/0061010 A1). As to dependent claim 5, the rejection of claim 1 is incorporated. Yun does not appear to expressly teach a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device. Wasko teaches a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device (“The management application program typically allows the user to view, browse and/or search application programs on the PED 102 using the GUI on the computing device 104,” paragraph 0017 lines 8-11). Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the window of Yun to comprise the search control of Wasko. (1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference. 
(2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately. (3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely making it easy to find a particular application (“rather than having to look through all the pages and/or scroll through the list of user selectable application list, the sort feature 264 and/or the search feature 262 provides a user-friendly and efficient method to locate specific application icons and/or user selectable application,” Wasko paragraph 0034 lines 4-9). Therefore, the rationale to support a conclusion that the claim would have been obvious is combining prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A). As to dependent claim 12, the rejection of claim 6 is incorporated. Yun does not appear to expressly teach a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device. Wasko teaches a method wherein the first window further comprises a third control, and the third control is configured to search the first window for an application on the electronic device (“The management application program typically allows the user to view, browse and/or search application programs on the PED 102 using the GUI on the computing device 104,” paragraph 0017 lines 8-11). 
Accordingly, it would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the window of Yun to comprise the search control of Wasko. (1) The Examiner finds that the prior art included each claim element listed above, although not necessarily in a single prior art reference, with the only difference between the claimed invention and the prior art being the lack of actual combination of the elements in a single prior art reference. (2) The Examiner finds that one of ordinary skill in the art could have combined the elements as claimed by known software development methods, and that in combination, each element merely performs the same function as it does separately. (3) The Examiner finds that one of ordinary skill in the art would have recognized that the results of the combination were predictable, namely making it easy to find a particular application (“rather than having to look through all the pages and/or scroll through the list of user selectable application list, the sort feature 264 and/or the search feature 262 provides a user-friendly and efficient method to locate specific application icons and/or user selectable application,” Wasko paragraph 0034 lines 4-9). Therefore, the rationale to support a conclusion that the claim would have been obvious is combining prior art elements according to known methods to yield predictable results to one of ordinary skill in the art. See MPEP § 2143(I)(A). Conclusion The prior art made of record and not relied upon is considered pertinent to Applicant’s disclosure: US 2013/0205244 A1, disclosing displaying multiple desktops. Applicant is required under 37 C.F.R. 
§ 1.111(c) to consider these references fully when responding to this action. It is noted that any citation to specific pages, columns, lines, or figures in the prior art references and any interpretation of the references should not be considered to be limiting in any way. A reference is relevant for all it contains and may be relied upon for all that it would have reasonably suggested to one having ordinary skill in the art. In re Heck, 699 F.2d 1331, 1332-33, 216 U.S.P.Q. 1038, 1039 (Fed. Cir. 1983) (quoting In re Lemelson, 397 F.2d 1006, 1009, 158 U.S.P.Q. 275, 277 (C.C.P.A. 1968)). In the interests of compact prosecution, Applicant is invited to contact the examiner via electronic media pursuant to USPTO policy outlined in MPEP § 502.03. All electronic communication must be authorized in writing. Applicant may wish to file an Internet Communications Authorization Form PTO/SB/439. Applicant may wish to request an interview using the Interview Practice website: http://www.uspto.gov/patent/laws-and-regulations/interview-practice. Applicant is reminded that Internet e-mail may not be used for communication for matters under 35 U.S.C. § 132 or which otherwise require a signature. A reply to an Office action may NOT be communicated by Applicant to the USPTO via Internet e-mail. If such a reply is submitted by Applicant via Internet e-mail, a paper copy will be placed in the appropriate patent application file with an indication that the reply is NOT ENTERED. See MPEP § 502.03(II). Any inquiry concerning this communication or earlier communications from the examiner should be directed to Ryan Barrett, whose telephone number is 571-270-3311. The examiner can normally be reached from 9:00 am to 5:30 pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, Applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. 
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Michelle Bechtold, can be reached at 571-431-0762. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /Ryan Barrett/ Primary Examiner, Art Unit 2148

Prosecution Timeline

Jun 12, 2024
Application Filed
Jan 02, 2026
Non-Final Rejection — §102, §103, §DP (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602612
INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND COMPUTER PROGRAM PRODUCT
2y 5m to grant Granted Apr 14, 2026
Patent 12585525
BUSINESS LANGUAGE PROCESSING USING LoQoS AND rb-LSTM
2y 5m to grant Granted Mar 24, 2026
Patent 12585506
SYSTEM AND METHOD FOR DETERMINATION OF MODEL FITNESS AND STABILITY FOR MODEL DEPLOYMENT IN AUTOMATED MODEL GENERATION
2y 5m to grant Granted Mar 24, 2026
Patent 12585990
HETEROGENEOUS COMPUTE-BASED ARTIFICIAL INTELLIGENCE MODEL PARTITIONING
2y 5m to grant Granted Mar 24, 2026
Patent 12585975
STATE MAPS FOR QUANTUM COMPUTING
2y 5m to grant Granted Mar 24, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
64%
Grant Probability
99%
With Interview (+43.7%)
3y 1m
Median Time to Grant
Low
PTA Risk
Based on 409 resolved cases by this examiner. Grant probability derived from career allow rate.
