DETAILED ACTION
Priority
This action is in response to the U.S. application filed 09 June 2023, which is a bypass continuation of PCT/CN2021/128562, filed 04 November 2021, which claims a foreign priority date of 11 December 2020. Claims 1-20 are pending and have been considered below.
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Information Disclosure Statement
The information disclosure statements (IDS) submitted on 09 June 2023, 07 March 2024, and 23 July 2024 have been received and entered into the record. The submissions are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements have been considered by the examiner.
Title of Invention
The title of the invention is not descriptive. A new title is required that is clearly indicative of the invention to which the claims are directed. 37 CFR 1.72(a) states: "The title of the invention may not exceed 500 characters in length and must be as short and specific as possible" (emphasis added).
The examiner suggests the following title: INFORMATION PROCESSING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM FOR AGGREGATING APPLICATION LABELS.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-4, 13-15 and 17-20 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 2012/0227007 A1) in view of Shen et al. (US 2014/0165012 A1).
As for independent claim 1, Nicholson teaches a method comprising:
receiving a first gesture [(e.g. see Nicholson paragraph 0028) ”Responsive to the user clicking on the "Taxes" common taskbar icon”].
displaying, in response to the first gesture acting on a label aggregate in a current interface, at least one label in the label aggregate, each label in the label aggregate being generated based on an application interface [(e.g. see Nicholson paragraph 0028 and Fig. 3B numeral 350B, 360B) ”Responsive to the user clicking on the "Taxes" common taskbar icon … The taskbar icons are grouped by task, as determined via one of the methods described herein. Thus, the "Taxes" icon, involving (as described with reference to FIG. 3A) a user's home taxes project applications and files, includes two applications. In this case, the applications are TURBOTAX and ACROBAT, and the files are a TURBOTAX file and a ".pdf" file 370B. Responsive to the user clicking on the "Taxes" common taskbar icon, these files are listed in a popup 360B and thus grouped together, allowing the user to easily locate the files of "Taxes"”].
displaying … a target application interface corresponding to a target label when the first gesture corresponds to the target label [(e.g. see Nicholson paragraphs 0001, 0030, 0031 and claims) ”an embodiment associates WINDOW 1 and WINDOW 2 as being associated with a common task 450 and groups taskbar icons for these applications in a common taskbar icon 460. The temporal grouping may result in a display as depicted for example in FIG. 3B … switching between open applications (repeatedly selecting or bringing different applications to the foreground) … further comprising: responsive to user interaction with the popup listing, bringing an object in the popup listing interacted with to a foreground view … clicking on one of the open document's taskbar icon in the popup view causes that document to be brought to the foreground in the display”].
wherein the target label is one of the at least one label [(e.g. see Nicholson paragraph 0028 and Fig. 3B numerals 370B) ”Responsive to the user clicking on the "Taxes" common taskbar icon, these files are listed in a popup 360B and thus grouped together, allowing the user to easily locate the files of "Taxes" and switch back and forth between them while working on a home taxes project … the applications are TURBOTAX and ACROBAT, and the files are a TURBOTAX file and a ".pdf" file 370B”].
Nicholson does not specifically teach displaying, in response to detecting an end of the first gesture, a target application. However, in the same field of endeavor or solving a similar problem, Shen teaches:
displaying, in response to detecting an end of the first gesture, a target application [(e.g. see Shen paragraphs 0011, 0014 and Figs. 1A-C) ”launch a desired application by first sliding an icon from a starting location along a first track … and then sliding the icon toward an application icon located near the end of a second track (an application selection gesture) … the user interface 101 comprises a plurality of tracks 115-122, a main track 115 connected to spurs 116-122, along which an icon 124 starting at a starting location 126 can be moved. Applications can be associated with the spurs 116-122 (or ends of the spurs). Application icons 130-136 are located near the ends of the spurs 116-122”].
Therefore, considering the teachings of Nicholson and Shen, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add displaying, in response to detecting an end of the first gesture, a target application, as taught by Shen, to the teachings of Nicholson because by providing a single gesture a user is spared from having to apply multiple gestures to achieve the same result (e.g. see Shen paragraph 0011).
As for dependent claim 2, Nicholson and Shen teach the method as described in claim 1 and Nicholson further teaches:
wherein said displaying, in response to the first gesture acting on the label aggregate in the current interface, the at least one label in the label aggregate comprises: determining a starting point position of the first gesture [(e.g. see Nicholson paragraph 0028 and Fig. 3B) ”Responsive to the user clicking on the "Taxes" common taskbar icon”].
and displaying the at least one label in the label aggregate in response to the starting point position being located on the label aggregate in the current interface [(e.g. see Nicholson paragraph 0028 and Fig. 3B numeral 350B, 360B) ”Responsive to the user clicking on the "Taxes" common taskbar icon … The taskbar icons are grouped by task, as determined via one of the methods described herein. Thus, the "Taxes" icon, involving (as described with reference to FIG. 3A) a user's home taxes project applications and files, includes two applications. In this case, the applications are TURBOTAX and ACROBAT, and the files are a TURBOTAX file and a ".pdf" file 370B. Responsive to the user clicking on the "Taxes" common taskbar icon, these files are listed in a popup 360B and thus grouped together, allowing the user to easily locate the files of "Taxes"”].
As for dependent claim 3, Nicholson and Shen teach the method as described in claim 1, but Nicholson does not specifically teach the following limitations. However, Shen teaches:
wherein said displaying, in response to detecting an end of the first gesture, the target application interface corresponding to the target label when the first gesture corresponds to the target label comprises: determining a first gesture parameter of the first gesture in response to the end of the first gesture [(e.g. see Shen paragraphs 0011, 0014 and Figs. 1A-C) ”launch a desired application by first sliding an icon from a starting location along a first track … and then sliding the icon toward an application icon located near the end of a second track (an application selection gesture) … the user interface 101 comprises a plurality of tracks 115-122, a main track 115 connected to spurs 116-122, along which an icon 124 starting at a starting location 126 can be moved. Applications can be associated with the spurs 116-122 (or ends of the spurs). Application icons 130-136 are located near the ends of the spurs 116-122”].
determining, based on the first gesture parameter, the target label selected by the first gesture from the at least one label and displaying the target application interface corresponding to the target label [(e.g. see Shen paragraphs 0014, 0016 and Figs. 1A-C) ”Applications can be associated with the spurs 116-122 (or ends of the spurs). Application icons 130-136 are located near the ends of the spurs 116-122. An application can be software separate from the computing device's operating system, such as a word processing, spreadsheet, gaming or social media application; or software that is a component or feature of an operating system, such as a phone, contact book or messaging application. Further, an application can be a short cut to a file, such as a web page bookmark, audio file, video file or word processing document, where selection of the short cut causes the application associated with the file to be launched and the file to be loaded into (played, etc.) the application. For example, selecting a web page bookmark icon will cause the associated web browser to be launched and the selected web page to be loaded, selecting a video icon will cause a video player to be launched and the selected video to be played, and selecting a settings icon will cause the device to navigate to a settings menu. The application icons 130-136 comprise a messaging icon 130, web browser icon 131, email icon 132, newspaper web page bookmark icon 133, phone icon 134, camera icon 135 and contact book icon 136 … launch a messaging application with a single gesture, a user can first move the icon 124 horizontally from the starting position 126 to the point where the main track 115 and the spur 116 meet (a portion of the unlock gesture) and then upwards vertically along spur 116 to the end of spur 116 (an application selection gesture), as indicated by path 140”].
The motivation to combine is the same as that used for claim 1.
As for dependent claim 4, Nicholson and Shen teach the method as described in claim 3 and Nicholson further teaches:
and wherein said displaying the target application interface corresponding to the target label comprises: displaying a target application floating window corresponding to the target label [(e.g. see Nicholson paragraphs 0001, 0029, 0031) ”a user opening a plurality of applications (Application 1 and Application 2) in a plurality of views (WINDOW 1 and WINDOW 2) at 410 and 420, an embodiment detects the user switching between WINDOW 1 and WINDOW 2 at 430 … bringing different applications to the foreground … Left clicking on the word processing application's common taskbar icon shows the word processing application documents currently open in a popup view. Further, clicking on one of the open document's taskbar icon in the popup view causes that document to be brought to the foreground in the display”].
Nicholson does not specifically teach the following limitations. However, Shen teaches:
wherein the first gesture parameter comprises a movement direction and a movement distance [(e.g. see Shen paragraphs 0016, 0019, 0025) ”a user can first move the icon 124 horizontally from the starting position 126 to the point where the main track 115 and the spur 116 meet (a portion of the unlock gesture) and then upwards vertically along spur 116 to the end of spur 116 (an application selection gesture), as indicated by path 140 … spur length, the distance between spurs and/or the distance from the starting location of the icon to the nearest spur, as well as additional unlock-and-launch user interface characteristics can be selected to reduce the likelihood that the icon could be unintentionally moved from the starting position to the end of one of one of the spurs … after a distance traced by the touching object on the touchscreen has exceeded a specified distance”].
wherein said determining, based on the first gesture parameter, the target label selected by the first gesture from the at least one label comprises: determining a label from the at least one label corresponding to the movement direction as a pre-selected label; and determining the pre-selected label as the target label selected by the first gesture from the at least one label, in response to the movement distance reaching a predetermined label selection distance threshold [(e.g. see Shen paragraphs 0017, 0019, 0025, 0028) ”main track-and-spur configurations for unlocking the computing device 110 and launching an application with a single gesture. In FIG. 1B, a main track 150 is oriented vertically and the spurs are oriented horizontally. Thus, a user first moves the icon 124 vertically along the main track 150 and then horizontally along one of the spurs to select an application to be launched. In FIG. 1C, the main track is oriented vertically and the spurs are arranged in a non-orthogonal manner relative to the main track … "Configure Spur" to change characteristics of the spur associated with the selected application icon. A user may wish to change spur characteristics to, for example, make it more convenient for the user to select a particular application. Configurable spur characteristics include spur length and the orientation of a spur relative to another track … spur length, the distance between spurs and/or the distance from the starting location of the icon to the nearest spur, as well as additional unlock-and-launch user interface characteristics can be selected to reduce the likelihood that the icon could be unintentionally moved from the starting position to the end of one of one of the spurs. In some embodiments, the icon can automatically return to the starting position once the touching object (finger, stylus, etc.) 
that moved the icon away from the starting position is no longer in contact with the touchscreen … after a distance traced by the touching object on the touchscreen has exceeded a specified distance”].
The motivation to combine is the same as that used for claim 1.
As for dependent claim 13, Nicholson and Shen teach the method as described in claim 3, but Nicholson does not specifically teach the following limitations. However, Shen teaches:
wherein the first gesture parameter comprises a movement distance; wherein the method further comprises: determining a standard interface of an application corresponding to the target label in response to the movement distance reaching a predetermined standard interface distance threshold and wherein said displaying the target application interface corresponding to the target label comprises: displaying the standard interface [(e.g. see Shen paragraphs 0015, 0019, 0025) ”launch a particular application by applying a single gesture to the touchscreen 105 … spur length, the distance between spurs and/or the distance from the starting location of the icon to the nearest spur, as well as additional unlock-and-launch user interface characteristics can be selected to reduce the likelihood that the icon could be unintentionally moved from the starting position to the end of one of one of the spurs. In some embodiments, the icon can automatically return to the starting position once the touching object (finger, stylus, etc.) that moved the icon away from the starting position is no longer in contact with the touchscreen … after a distance traced by the touching object on the touchscreen has exceeded a specified distance”].
The motivation to combine is the same as that used for claim 1.
As for dependent claim 14, Nicholson and Shen teach the method as described in claim 3, but Nicholson does not specifically teach the following limitations. However, Shen teaches:
wherein the first gesture parameter comprises an end point position; wherein the method further comprises: determining a standard interface corresponding to an application of the target label in response to the end point position being in a predetermined standard interface trigger area and wherein said displaying the target application interface corresponding to the target label comprises: displaying the standard interface [(e.g. see Shen paragraphs 0014, 0016 and Figs. 1A-C) ”Applications can be associated with the spurs 116-122 (or ends of the spurs). Application icons 130-136 are located near the ends of the spurs 116-122. An application can be software separate from the computing device's operating system, such as a word processing, spreadsheet, gaming or social media application; or software that is a component or feature of an operating system, such as a phone, contact book or messaging application … launch a messaging application with a single gesture, a user can first move the icon 124 horizontally from the starting position 126 to the point where the main track 115 and the spur 116 meet (a portion of the unlock gesture) and then upwards vertically along spur 116 to the end of spur 116 (an application selection gesture), as indicated by path 140”].
The motivation to combine is the same as that used for claim 1.
As for dependent claim 15, Nicholson and Shen teach the method as described in claim 3, but Nicholson does not specifically teach the following limitation. However, Shen teaches:
wherein the first gesture parameter comprises a gesture pause time length; wherein the method further comprises: determining a standard interface corresponding to an application of the target label in response to the gesture pause time length reaching a predetermined standard interface time length threshold; and wherein said displaying the target application interface corresponding to the target label comprises: displaying the standard interface [(e.g. see Shen paragraph 0024) ”the user supplies an application selection gesture by moving the touching object from the ending location 240 to a region 260 occupied by an application icon 270. To complete the single gesture, the user can lift the touching object from the touchscreen 200. In response, the computing device determines the application icon 270 to be the selected application icon, and executes an associated application. In alternative embodiments, an application can be launched when the touching object is first moved to a location where an application icon is displayed or when the touching object has settled on a region where an application icon is displayed for a specified amount of time (e.g., one-quarter, one-half or one second)”].
The motivation to combine is the same as that used for claim 1.
As for dependent claim 17, Nicholson and Shen teach the method as described in claim 1 and Nicholson further teaches:
further comprising: obtaining configuration information corresponding to the label aggregate; configuring the label aggregate based on the configuration information; and presenting the label aggregate with a display effect corresponding to the configuration information in the current interface [(e.g. see Nicholson paragraphs 0026, 0028 and Fig. 3B) ”Finally, for "Taxes", defined as a task, taskbar icons for applications TURBOTAX (7) and ACROBAT (8) are grouped in the taskbar together … Thus, the "Taxes" icon, involving (as described with reference to FIG. 3A) a user's home taxes project applications and files, includes two applications”].
As for independent claim 18, Nicholson and Shen teach a device. Claim 18 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1.
As for dependent claim 19, Nicholson and Shen teach the device as described in claim 18; further, claim 19 discloses substantially the same limitations as claim 2. Therefore, it is rejected with the same rationale as claim 2.
As for independent claim 20, Nicholson and Shen teach a non-transitory computer-readable storage medium. Claim 20 discloses substantially the same limitations as claim 1. Therefore, it is rejected with the same rationale as claim 1.
Claims 5 and 10 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 2012/0227007 A1) in view of Shen et al. (US 2014/0165012 A1), as applied to claims 1 and 3 above, and further in view of Desai et al. (US 2012/0084713 A1).
As for dependent claim 5, Nicholson and Shen teach the method as described in claim 3, but Nicholson does not specifically teach the following limitations. However, Shen teaches:
wherein the first gesture parameter comprises an end point position [(e.g. see Shen paragraphs 0011, 0014 and Figs. 1A-C) ”launch a desired application by first sliding an icon from a starting location along a first track … and then sliding the icon toward an application icon located near the end of a second track (an application selection gesture) … the user interface 101 comprises a plurality of tracks 115-122, a main track 115 connected to spurs 116-122, along which an icon 124 starting at a starting location 126 can be moved. Applications can be associated with the spurs 116-122 (or ends of the spurs). Application icons 130-136 are located near the ends of the spurs 116-122”].
Nicholson and Shen do not specifically teach wherein said determining, based on the first gesture parameter, the target label selected by the first gesture from the at least one label comprises: determining a label corresponding to an application floating window trigger area as the target label selected by the first gesture from the at least one label, in response to the end point position being in the predetermined application floating window trigger area and wherein said displaying the target application interface corresponding to the target label comprises: displaying a target application floating window corresponding to the target label. However, in the same field of endeavor or solving a similar problem, Desai teaches:
wherein said determining, based on the first gesture parameter, the target label selected by the first gesture from the at least one label comprises: determining a label corresponding to an application floating window trigger area as the target label selected by the first gesture from the at least one label, in response to the end point position being in the predetermined application floating window trigger area and wherein said displaying the target application interface corresponding to the target label comprises: displaying a target application floating window corresponding to the target label [(e.g. see Desai paragraphs 0082, 0084 and Figs. 6D and 10) ”FIGS. 6A through 6E illustrate example previews for a taskbar group. In some embodiments, previews for the taskbar groups can be shown, for example, responsive to a user clicking on or hovering over an icon on the taskbar … FIG. 6C illustrates a taskbar 620 with a thumbnail preview 621 for taskbar group 622. Thumbnail preview 621 may include a title portion 623 having an icon and/or text (e.g., a portion of the window title). Thumbnail preview 621 may also include a snapshot portion 624 that provides a snapshot, or otherwise reflects the appearance, of content of the associated graphical window … A user may be able to select a particular preview entry or thumbnail to bring the corresponding window to the foreground of the desktop environment. Microsoft Windows and Mac OS are two examples among many operating systems that provide preview functionality”].
Therefore, considering the teachings of Nicholson, Shen and Desai, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add wherein said determining, based on the first gesture parameter, the target label selected by the first gesture from the at least one label comprises: determining a label corresponding to an application floating window trigger area as the target label selected by the first gesture from the at least one label, in response to the end point position being in the predetermined application floating window trigger area and wherein said displaying the target application interface corresponding to the target label comprises: displaying a target application floating window corresponding to the target label, as taught by Desai, to the teachings of Nicholson and Shen because presenting rendered static or dynamic thumbnails of those windows allows a user to intuitively select a window to make active (e.g. see Desai paragraph 0004).
As for dependent claim 10, Nicholson and Shen teach the method as described in claim 1, but do not specifically teach the following limitations. However, Desai teaches:
further comprising: displaying at least one application floating window corresponding to the at least one label in the label aggregate, in response to the first gesture acting on the label aggregate in the current interface; displaying a target application floating window in the current interface when the first gesture corresponds to the target application floating window, in response to detecting the end of the first gesture, wherein the target application floating window is one of the at least one application floating window [(e.g. see Desai paragraphs 0082, 0084 and Figs. 6D and 10) ”FIGS. 6A through 6E illustrate example previews for a taskbar group. In some embodiments, previews for the taskbar groups can be shown, for example, responsive to a user clicking on or hovering over an icon on the taskbar … FIG. 6C illustrates a taskbar 620 with a thumbnail preview 621 for taskbar group 622. Thumbnail preview 621 may include a title portion 623 having an icon and/or text (e.g., a portion of the window title). Thumbnail preview 621 may also include a snapshot portion 624 that provides a snapshot, or otherwise reflects the appearance, of content of the associated graphical window … A user may be able to select a particular preview entry or thumbnail to bring the corresponding window to the foreground of the desktop environment. Microsoft Windows and Mac OS are two examples among many operating systems that provide preview functionality”].
The motivation to combine is the same as that used for claim 5.
Claims 6-9, 12 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 2012/0227007 A1) in view of Shen et al. (US 2014/0165012 A1), as applied to claim 1 above, and further in view of Malamud et al. (US 5,825,357).
As for dependent claim 6, Nicholson and Shen teach the method as described in claim 1, but do not specifically teach further comprising: receiving a second gesture; generating, in response to the second gesture acting on an application interface displayed in the current interface, a label corresponding to the application interface based on the application interface and accommodating the label corresponding to the application interface in the label aggregate for display. However, in the same field of endeavor or solving a similar problem, Malamud teaches:
further comprising: receiving a second gesture; generating, in response to the second gesture acting on an application interface displayed in the current interface, a label corresponding to the application interface based on the application interface and accommodating the label corresponding to the application interface in the label aggregate for display [(e.g. see Malamud col 8 lines 29-40, col 9 lines 1-15 and Fig. 2) ”The act of causing an independent object to become associated with the Tray Object 50 is referred to herein as "docking". One manner in which a user invokes the docking process on an independent object is by guiding the mouse controlled pointer to a window displayed in the applications section 14 of the display screen … The user docks an object by selecting the window using the mouse controlled pointer, dragging the window to the panel area 24 and dropping the window in the panel area 24 … the computer system determines that an embedded object did not previously exist in the Tray Object 50, then control passes to step 208. After completion of step 208, control passes to step 210 wherein the computer system determines whether an embedded object previously existed prior to the embedding of the independent object. If no embedded object previously existed in the Tray Object 50, then control passes to step 212 wherein the computer system adjusts the allocation of the space in the tray section 12 to accommodate the portion of the tray section 12 allocated for displaying an embedded object and redraws the panel area to reflect the new space allocation. Thereafter, control passes to step 214 wherein the computer system informs the independent object that it is now embedded in the Tray Object 50 and sets the display state for the embedded object to "visible" to enable display of the embedded object within the tray section 12”].
Therefore, considering the teachings of Nicholson, Shen and Malamud, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add further comprising: receiving a second gesture; generating, in response to the second gesture acting on an application interface displayed in the current interface, a label corresponding to the application interface based on the application interface and accommodating the label corresponding to the application interface in the label aggregate for display, as taught by Malamud, to the teachings of Nicholson and Shen because it allows computer resources to be quickly and easily accessed and enables the user to easily alter the modifiable set of computer resources (e.g. see Malamud col 2 lines 2-3, 46-47).
As for dependent claim 7, Nicholson, Shen and Malamud teach the method as described in claim 6, but Nicholson and Shen do not specifically teach the following limitations. However, Malamud teaches:
wherein said generating, in response to the second gesture acting on the application interface displayed in the current interface, the label corresponding to the application interface based on the application interface comprises: determining an interface distribution parameter of the application interface under the action of the second gesture and generating the label corresponding to the application interface based on the application interface, in response to the interface distribution parameter satisfying an interface conversion condition [(e.g. see Malamud col 8 lines 29-40, col 9 lines 1-15 and Fig. 2) ”The act of causing an independent object to become associated with the Tray Object 50 is referred to herein as "docking". One manner in which a user invokes the docking process on an independent object is by guiding the mouse controlled pointer to a window displayed in the applications section 14 of the display screen … The user docks an object by selecting the window using the mouse controlled pointer, dragging the window to the panel area 24 and dropping the window in the panel area 24 … the computer system determines that an embedded object did not previously exist in the Tray Object 50, then control passes to step 208. After completion of step 208, control passes to step 210 wherein the computer system determines whether an embedded object previously existed prior to the embedding of the independent object. If no embedded object previously existed in the Tray Object 50, then control passes to step 212 wherein the computer system adjusts the allocation of the space in the tray section 12 to accommodate the portion of the tray section 12 allocated for displaying an embedded object and redraws the panel area to reflect the new space allocation. 
Thereafter, control passes to step 214 wherein the computer system informs the independent object that it is now embedded in the Tray Object 50 and sets the display state for the embedded object to "visible" to enable display of the embedded object within the tray section 12”]. Examiner notes that the window must cross a boundary into the tray because it must be dropped onto the tray.
The motivation to combine is the same as that used for claim 6.
As for dependent claim 8, Nicholson, Shen and Malamud teach the method as described in claim 6, but Nicholson and Shen do not specifically teach the following limitations. However, Malamud teaches:
wherein said generating, in response to the second gesture acting on the application interface displayed in the current interface, the label corresponding to the application interface based on the application interface comprises: determining a second gesture parameter of the second gesture; generating the label corresponding to the application interface based on the application interface, in response to the second gesture parameter satisfying a label generation condition [(e.g. see Malamud col 6 line 65 – col 7 line 3, col 8 lines 29-40, col 9 lines 1-15 and Fig. 2) “The act of causing an independent object to become associated with the Tray Object 50 is referred to herein as "docking". One manner in which a user invokes the docking process on an independent object is by guiding the mouse controlled pointer to a window displayed in the applications section 14 of the display screen … The user docks an object by selecting the window using the mouse controlled pointer, dragging the window to the panel area 24 and dropping the window in the panel area 24 … the computer system determines that an embedded object did not previously exist in the Tray Object 50, then control passes to step 208. After completion of step 208, control passes to step 210 wherein the computer system determines whether an embedded object previously existed prior to the embedding of the independent object. If no embedded object previously existed in the Tray Object 50, then control passes to step 212 wherein the computer system adjusts the allocation of the space in the tray section 12 to accommodate the portion of the tray section 12 allocated for displaying an embedded object and redraws the panel area to reflect the new space allocation.
Thereafter, control passes to step 214 wherein the computer system informs the independent object that it is now embedded in the Tray Object 50 and sets the display state for the embedded object to "visible" to enable display of the embedded object within the tray section 12 … the panel area 24 is the portion of the tray section 12 utilized by a user to dock independent objects including: open electronic folders such as Bob's Folder 40, open word documents such as My Document 42”]. Examiner notes that when the window is dropped into the tray the label is created and added to the tray.
The motivation to combine is the same as that used for claim 6.
As for dependent claim 9, Nicholson and Shen teach the method as described in claim 1, but do not specifically teach the following limitation. However, Malamud teaches:
wherein said displaying the at least one label in the label aggregate in response to the first gesture acting on the label aggregate in the current interface comprises: displaying the at least one label and a predetermined label vacancy area in the label aggregate in response to the first gesture acting on the label aggregate in the current interface and the label aggregate being unsaturated and wherein the method further comprises: generating a to-be-accommodated label based on an application corresponding to the current interface when the first gesture corresponds to the label vacancy area, in response to detecting the end of the first gesture and accommodating the to-be-accommodated label in the label vacancy area of the label aggregate [(e.g. see Malamud col 8 line 65 – col 9 line 15, col 1 lines 55-60) “the computer system determines that an embedded object did not previously exist in the Tray Object 50, then control passes to step 208. After completion of step 208, control passes to step 210 wherein the computer system determines whether an embedded object previously existed prior to the embedding of the independent object. If no embedded object previously existed in the Tray Object 50, then control passes to step 212 wherein the computer system adjusts the allocation of the space in the tray section 12 to accommodate the portion of the tray section 12 allocated for displaying an embedded object and redraws the panel area to reflect the new space allocation. Thereafter, control passes to step 214 wherein the computer system informs the independent object that it is now embedded in the Tray Object 50 and sets the display state for the embedded object to "visible" to enable display of the embedded object within the tray section 12 … designates a minimum size requirement. Before docking, the CPU 2 determines whether the available space in the present panel exceeds the minimum size requirement for the application.
If the CPU 2 determines there is insufficient space available in the presently displayed panel, then the CPU 2 will prevent docking of the application within the current panel area 24”].
The motivation to combine is the same as that used for claim 6.
As for dependent claim 12, Nicholson and Shen teach the method as described in claim 1, but do not specifically teach the following limitations. However, Malamud teaches:
wherein the label aggregate comprises a single label and wherein the method further comprises: displaying in the current interface an application floating window corresponding to the single label in the label aggregate, in response to a click trigger operation acting on the label aggregate in the current interface [(e.g. see Malamud col 4 lines 32-34, col 10 lines 23-36) “The program manager enables the user to cause any of the listed applications to be surfaced (i.e., displayed in the applications section 14). In the docked state, the program manager interface, when displayed in the panel area 24, comprises a set of mouse-controlled pointer-sensitive buttons corresponding to the list of running application programs. The user selects the running application programs for display in the applications section 14 by guiding the pointer to a button representing the desired application displayed in the tray section 12 occupied by the docked program manager and thereafter selecting the application by clicking the mouse … An application window may occupy the entire applications section 14 or a smaller portion of the entire applications section 14”].
The motivation to combine is the same as that used for claim 6.
As for dependent claim 16, Nicholson and Shen teach the method as described in claim 1, but do not specifically teach the following limitation. However, Malamud teaches:
further comprising: dragging the label aggregate to an aggregate fixing position corresponding to an aggregate dragging operation, in response to the aggregate dragging operation acting on the label aggregate in the current interface [(e.g. see Malamud col 3 lines 20-23, col 5 lines 15-24) “The computer system enters step 100 in response to a request by a user to modify the position and/or size of the tray section 12. Next, at step 102 the computer system saves the current position of the tray section 12 as a temporary variable. Next, at step 104, the computer system saves the requested new position of the tray section 12 and control passes to step 106 … The display set modifier enables the user to modify the set of computer resources associated with the tray section by a mouse controlled drag and drop operation”].
The motivation to combine is the same as that used for claim 6.
Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Nicholson et al. (US 2012/0227007 A1) in view of Shen et al. (US 2014/0165012 A1), as applied to claim 3 above, and further in view of Meredith et al. (US 2011/0314389 A1).
As for dependent claim 11, Nicholson and Shen teach the method as described in claim 3, but do not specifically teach further comprising: displaying an application notification message of an application corresponding to the label in a notification area associated with the label. However, in the same field of endeavor or solving similar problems, Meredith teaches:
further comprising: displaying an application notification message of an application corresponding to the label in a notification area associated with the label [(e.g. see Meredith paragraph 0102 and Fig. 6b) “Badges 106 are applied to the icons or stacks of icons in the toolbar to provide notifications to the user. Badges 108 are also applied to the icons in the pop up showing an expanded stack of icons. The badges 106 and 108 are implemented through different mechanisms. The badges 106 overlaid on icons in the taskbar are implemented by the Application Platform communicating with the operating system of the communication device using an API within the integration layer that can be called by the cross-platform application. The badges 108 that appear in a pop up (e.g. an expanded stack of icons or another pop up) are implemented by the rendering engine layer of the Application Platform. The use of "pop up pages" to provide notifications and user interfaces is discussed further below”].
Therefore, considering the teachings of Nicholson, Shen and Meredith, it would have been obvious to one having ordinary skill in the art before the effective filing date of the claimed invention to add further comprising: displaying an application notification message of an application corresponding to the label in a notification area associated with the label, as taught by Meredith, to the teachings of Nicholson and Shen because it increases the efficiency of a user's task completion (e.g. see Meredith paragraph 0007).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent 9,817,548 B2 issued to Lai et al. on 14 November 2017. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g. grouping of icons representing open windows and applications that can be called up using gestures).
U.S. PGPub 2004/0066414 A1 to Czerwinski et al., published 08 April 2004. The subject matter disclosed therein is pertinent to that of claims 1-20 (e.g. grouping windows as icons in a sidebar/taskbar).
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CHRISTOPHER J FIBBI whose telephone number is (571) 270-3358. The examiner can normally be reached Monday - Thursday (8am-6pm).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CHRISTOPHER J FIBBI/Primary Examiner, Art Unit 2174