DETAILED ACTION
Status of the Claims
The preliminary amendment dated 11/17/25 is entered. Claim 1 is amended. Claims 1-20 are pending.
Foreign Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.
Information Disclosure Statements
The information disclosure statement (IDS) submitted on 10/30/25 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Double Patenting
The nonstatutory double patenting rejection is based on a judicially created doctrine grounded in public policy (a policy reflected in the statute) so as to prevent the unjustified or improper timewise extension of the “right to exclude” granted by a patent and to prevent possible harassment by multiple assignees. A nonstatutory double patenting rejection is appropriate where the conflicting claims are not identical, but at least one examined application claim is not patentably distinct from the reference claim(s) because the examined application claim is either anticipated by, or would have been obvious over, the reference claim(s). See, e.g., In re Berg, 140 F.3d 1428, 46 USPQ2d 1226 (Fed. Cir. 1998); In re Goodman, 11 F.3d 1046, 29 USPQ2d 2010 (Fed. Cir. 1993); In re Longi, 759 F.2d 887, 225 USPQ 645 (Fed. Cir. 1985); In re Van Ornum, 686 F.2d 937, 214 USPQ 761 (CCPA 1982); In re Vogel, 422 F.2d 438, 164 USPQ 619 (CCPA 1970); In re Thorington, 418 F.2d 528, 163 USPQ 644 (CCPA 1969).
A timely filed terminal disclaimer in compliance with 37 CFR 1.321(c) or 1.321(d) may be used to overcome an actual or provisional rejection based on nonstatutory double patenting provided the reference application or patent either is shown to be commonly owned with the examined application, or claims an invention made as a result of activities undertaken within the scope of a joint research agreement. See MPEP § 717.02 for applications subject to examination under the first inventor to file provisions of the AIA as explained in MPEP § 2159. See MPEP § 2146 et seq. for applications not subject to examination under the first inventor to file provisions of the AIA. A terminal disclaimer must be signed in compliance with 37 CFR 1.321(b).
The filing of a terminal disclaimer by itself is not a complete reply to a nonstatutory double patenting (NSDP) rejection. A complete reply requires that the terminal disclaimer be accompanied by a reply requesting reconsideration of the prior Office action. Even where the NSDP rejection is provisional, the reply must be complete. See MPEP § 804, subsection I.B.1. For a reply to a non-final Office action, see 37 CFR 1.111(a). For a reply to a final Office action, see 37 CFR 1.113(c). A request for reconsideration, while not provided for in 37 CFR 1.113(c), may be filed after final for consideration. See MPEP §§ 706.07(e) and 714.13.
The USPTO Internet website contains terminal disclaimer forms which may be used. Please visit www.uspto.gov/patent/patents-forms. The actual filing date of the application in which the form is filed determines what form (e.g., PTO/SB/25, PTO/SB/26, PTO/AIA/25, or PTO/AIA/26) should be used. A web-based eTerminal Disclaimer may be filled out completely online using web-screens. An eTerminal Disclaimer that meets all requirements is auto-processed and approved immediately upon submission. For more information about eTerminal Disclaimers, refer to www.uspto.gov/patents/apply/applying-online/eterminal-disclaimer.
Claims 1, 16, and 20 are rejected on the ground of nonstatutory double patenting as being unpatentable over claims 1, 18, and 21 of U.S. Patent No. 12314469. Although the claims at issue are not identical, they are not patentably distinct from each other because the current claims are fully taught by the parent case, as shown below:
| Current Application 19189802 | US-12314469 |
| --- | --- |
| 1. A method comprising: at an electronic device including a non-transitory memory, one or more processors, an extremity tracker, and a display: | 1. A method comprising: at an electronic device including a non-transitory memory, one or more processors, an eye tracker, a spatial selector tracker, and a display: |
| displaying, on the display, a user interface that includes: a plurality of content regions, a plurality of affordances respectively associated with the plurality of content regions, and | displaying a content manipulation region on the display, wherein the content manipulation region comprises less than the entirety of the display; |
| a computer-generated representation of a trackpad associated with a physical surface; | ...displaying, on the display, a computer-generated representation of a trackpad at a placement location on the physical surface… |
| | while displaying the content manipulation region on the display: determining a first gaze position within the content manipulation region based on eye tracking data from the eye tracker; |
| while displaying the user interface, detecting, via the extremity tracker, a first input directed to a first affordance of the plurality of affordances, wherein the first affordance is associated with a first content region of the plurality of content regions; | determining a selection point associated with a physical surface, based on spatial selector data from the spatial selector tracker; and |
| and in response to detecting the first input, associating the computer-generated representation of the trackpad with the first manipulation region. | displaying, on the display, a computer-generated representation of a trackpad at a placement location on the physical surface, wherein the placement location is based on the first gaze position and the selection point. |
| 16. An electronic device comprising: one or more processors; a non-transitory memory; an extremity tracker; a display; and one or more programs, wherein the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, the one or more programs including instructions for: | 18. An electronic device comprising: one or more processors; a non-transitory memory; an eye tracker; a spatial selector tracker; a display; and one or more programs, wherein the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors, the one or more programs including instructions for: |
| displaying, on the display, a user interface that includes: a plurality of content regions, a plurality of affordances respectively associated with the plurality of content regions, and | displaying a content manipulation region on the display, wherein the content manipulation region comprises less than the entirety of the display; |
| a computer-generated representation of a trackpad associated with a physical surface; | ...displaying, on the display, a computer-generated representation of a trackpad at a placement location on the physical surface… |
| | while displaying the content manipulation region on the display: determining a first gaze position within the content manipulation region based on eye tracking data from the eye tracker; |
| while displaying the user interface, detecting, via the extremity tracker, a first input directed to a first affordance of the plurality of affordances, wherein the first affordance is associated with a first content region of the plurality of content regions; | determining a selection point associated with a physical surface, based on spatial selector data from the spatial selector tracker; and |
| and in response to detecting the first input, associating the computer-generated representation of the trackpad with the first manipulation region. | displaying, on the display, a computer-generated representation of a trackpad at a placement location on the physical surface, wherein the placement location is based on the first gaze position and the selection point. |
| 20. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by an electronic device with one or more processors, an extremity tracker, and a display, cause the electronic device to: | 21. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which, when executed by an electronic device with one or more processors, an eye tracker, a spatial selector tracker, and a display, cause the electronic device to: |
| displaying, on the display, a user interface that includes: a plurality of content regions, a plurality of affordances respectively associated with the plurality of content regions, and | displaying a content manipulation region on the display, wherein the content manipulation region comprises less than the entirety of the display; |
| a computer-generated representation of a trackpad associated with a physical surface; | ...displaying, on the display, a computer-generated representation of a trackpad at a placement location on the physical surface… |
| | while displaying the content manipulation region on the display: determining a first gaze position within the content manipulation region based on eye tracking data from the eye tracker; |
| while displaying the user interface, detecting, via the extremity tracker, a first input directed to a first affordance of the plurality of affordances, wherein the first affordance is associated with a first content region of the plurality of content regions; | determining a selection point associated with a physical surface, based on spatial selector data from the spatial selector tracker; and |
| and in response to detecting the first input, associating the computer-generated representation of the trackpad with the first manipulation region. | displaying, on the display, a computer-generated representation of a trackpad at a placement location on the physical surface, wherein the placement location is based on the first gaze position and the selection point. |
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Wilytsch, US-11054896, in view of Khalid, US-20100299436.
In regards to claim 1, Wilytsch discloses a method (Fig. 7; Col. 8, 25-27 "a method for generating a reference plane and presenting a virtual interaction object to a user wearing an HMD") comprising: at an electronic device (Fig. 1, 100 HMD) including a non-transitory memory (Col. 9, 55-60 “a non-transitory, tangible computer readable”), one or more processors (Col. 9, 55-60 “computer processor”), an extremity tracker (Fig. 4, 420 hand tracking module; Col. 5, 29-31 and Col. 3, 35-37 “hand tracking module 420 receives and analyzes image data from the cameras 120 to tracks [sic] hands 105, 110 of the user 101”, wherein “one or more cameras [are] integrated into the HMD 100”), and a display (Fig. 4, 405 display): displaying, on the display, a user interface (Fig. 3, 300 virtual reality view) that includes: a content region (Fig. 3, 320 virtual screen), a plurality of affordances respectively associated with the content region (Fig. 3, items displayed in 320 virtual screen including 335 message), and a computer-generated representation of a trackpad (Fig. 3, 330 virtual trackpad) associated with a physical surface (Fig. 3, 315 reference plane; Col. 4, 13-14 “In FIG. 3, the reference plane 315 is on the surface 115 of a real-world object (e.g., a table)”); while displaying the user interface, detecting, via the extremity tracker, a first input directed to a first affordance of the plurality of affordances (Fig. 3, items displayed in 320 virtual screen including 335 message; Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. 
In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”), wherein the first affordance is associated with a first content region (Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”); and in response to detecting the first input, associating the computer-generated representation of the trackpad with the first content region (Fig. 3, 330 virtual trackpad; Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”).
Wilytsch does not expressly disclose a plurality of content regions, or that the first affordance is associated with a first content region of the plurality of content regions.
Khalid discloses a display screen (Fig. 2C, screen of 202 external display device) comprising: a plurality of content regions (Fig. 2C, 204a-n resources, i.e. content regions); and a first affordance associated with a first content region of the plurality of content regions (Fig. 2C, each of the 204a-n resources, i.e. content regions, contains affordances, i.e. graphical, textual, and other information).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 16, Wilytsch discloses an electronic device (Fig. 1, 100 HMD) comprising: one or more processors (Col. 9, 55-60 “computer processor”); a non-transitory memory (Col. 9, 55-60 “a non-transitory, tangible computer readable”); an extremity tracker (Fig. 4, 420 hand tracking module; Col. 5, 29-31 and Col. 3, 35-37 “hand tracking module 420 receives and analyzes image data from the cameras 120 to tracks [sic] hands 105, 110 of the user 101”, wherein “one or more cameras [are] integrated into the HMD 100”); a display (Fig. 4, 405 display); and one or more programs (Col. 9, 32-40 programs), wherein the one or more programs (Col. 9, 32-40 programs) are stored in the non-transitory memory (Col. 9, 55-60 “a non-transitory, tangible computer readable”) and configured to be executed by the one or more processors (Col. 9, 55-60 “computer processor”), the one or more programs (Col. 9, 32-40 programs) including instructions for: displaying, on the display, a user interface (Fig. 3, 300 virtual reality view) that includes: a content region (Fig. 3, 320 virtual screen), a plurality of affordances respectively associated with the content region (Fig. 3, items displayed in 320 virtual screen including 335 message), and a computer-generated representation of a trackpad (Fig. 3, 330 virtual trackpad) associated with a physical surface (Fig. 3, 315 reference plane; Col. 4, 13-14 “In FIG. 3, the reference plane 315 is on the surface 115 of a real-world object (e.g., a table)”); while displaying the user interface, detecting, via the extremity tracker, a first input directed to a first affordance of the plurality of affordances (Fig. 3, items displayed in 320 virtual screen including 335 message; Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. 
For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”), wherein the first affordance is associated with a first content region (Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”); and in response to detecting the first input, associating the computer-generated representation of the trackpad with the first content region (Fig. 3, 330 virtual trackpad; Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”).
Wilytsch does not expressly disclose a plurality of content regions, or that the first affordance is associated with a first content region of the plurality of content regions.
Khalid discloses a display screen (Fig. 2C, screen of 202 external display device) comprising: a plurality of content regions (Fig. 2C, 204a-n resources, i.e. content regions); and a first affordance associated with a first content region of the plurality of content regions (Fig. 2C, each of the 204a-n resources, i.e. content regions, contains affordances, i.e. graphical, textual, and other information).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 20, Wilytsch discloses a non-transitory computer readable storage medium (Col. 9, 46-47 “non-transitory, tangible computer readable storage medium”) storing one or more programs (Col. 9, 32-40 programs), the one or more programs (Col. 9, 32-40 programs) comprising instructions, which, when executed by an electronic device (Fig. 1, 100 HMD) with one or more processors (Col. 9, 55-60 “computer processor”), an extremity tracker (Fig. 4, 420 hand tracking module; Col. 5, 29-31 and Col. 3, 35-37 “hand tracking module 420 receives and analyzes image data from the cameras 120 to tracks [sic] hands 105, 110 of the user 101”, wherein “one or more cameras [are] integrated into the HMD 100”), and a display (Fig. 4, 405 display), cause the electronic device to: display, on the display, a user interface (Fig. 3, 300 virtual reality view) that includes: a content region (Fig. 3, 320 virtual screen), a plurality of affordances respectively associated with the content region (Fig. 3, items displayed in 320 virtual screen including 335 message), and a computer-generated representation of a trackpad (Fig. 3, 330 virtual trackpad) associated with a physical surface (Fig. 3, 315 reference plane; Col. 4, 13-14 “In FIG. 3, the reference plane 315 is on the surface 115 of a real-world object (e.g., a table)”); while displaying the user interface, detecting, via the extremity tracker, a first input directed to a first affordance of the plurality of affordances (Fig. 3, items displayed in 320 virtual screen including 335 message; Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. 
In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”), wherein the first affordance is associated with a first content region (Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”); and in response to detecting the first input, associating the computer-generated representation of the trackpad with the first content region (Fig. 3, 330 virtual trackpad; Col. 4, 54-64 “In the case of FIG. 3, interactions with the virtual keyboard 325 and virtual trackpad 330 can result in virtual interaction images being displayed on the virtual screen 320. For example, the user 101 can move a cursor across the virtual screen 320 by dragging a finger across a portion of the reference plane corresponding to the trackpad 330. In another example, the message 335 is displayed responsive to the user 101 interacting with the virtual keyboard 325.”).
Wilytsch does not expressly disclose a plurality of content regions, or that the first affordance is associated with a first content region of the plurality of content regions.
Khalid discloses a display screen (Fig. 2C, screen of 202 external display device) comprising: a plurality of content regions (Fig. 2C, 204a-n resources, i.e. content regions); and a first affordance associated with a first content region of the plurality of content regions (Fig. 2C, each of the 204a-n resources, i.e. content regions, contains affordances, i.e. graphical, textual, and other information).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 2, Khalid further discloses while the computer-generated representation of the trackpad is associated with the first content region: detecting, via the extremity tracker, a second input directed to within the computer-generated representation of the trackpad (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input; Khalid Fig. 2C shows scroll bars which can be interacted with as a second input); and in response to receiving the second input, performing a first content manipulation operation with respect to the first content region according to the second input (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input; Khalid Fig. 2C shows scroll bars which can be interacted with as a second input).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 3, Khalid further discloses the first content manipulation operation includes a drawing operation within the first content region (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input; dragging a graphical user interface element corresponds to moving a finger on the trackpad, i.e. drawing).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 4, Khalid further discloses the first content manipulation operation includes a navigational operation with respect to content within the first content region (Khalid Fig. 2C shows scroll bars which can be interacted with as a second input).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 5, Khalid further discloses the first content region includes an object, and wherein the first content manipulation operation including pasting the object to a second content region of the plurality of content regions (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input, which constitutes pasting the object from one resource, i.e. content region, to another resource, i.e. content region).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 18, Khalid further discloses the first content region includes an object, and wherein the first content manipulation operation including pasting the object to a second content region of the plurality of content regions (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input, which constitutes pasting the object from one resource, i.e. content region, to another resource, i.e. content region).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 6, Khalid further discloses the second input corresponds to a drag and drop movement that begins at a location within the computer-generated representation of the trackpad that maps to the object within the first content region, wherein the drag and drop movement terminates within a second affordance of the plurality of affordances, and wherein the second affordance is associated with the second content region (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input from one resource, i.e. content region, to another resource, i.e. content region, which has another affordance, i.e. background and other GUI elements).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 19, Khalid further discloses the second input corresponds to a drag and drop movement that begins at a location within the computer-generated representation of the trackpad that maps to the object within the first content region, wherein the drag and drop movement terminates within a second affordance of the plurality of affordances, and wherein the second affordance is associated with the second content region (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input from one resource, i.e. content region, to another resource, i.e. content region, which has another affordance, i.e. background and other GUI elements).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 7, Khalid further discloses while detecting the second input, displaying a representation of the object that follows the drag and drop movement (Khalid Par. 0280 “drag graphical user interface elements from one portion of the display to another” as a second input from one resource, i.e. content region, to another resource, i.e. content region, which has another affordance, i.e. background and other GUI elements).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 8, Khalid further discloses at least a portion of the plurality of affordances are associated with a common application (Khalid Fig. 2C, 204d powerpoint resource with affordances, i.e. background and other GUI elements).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 9, Khalid further discloses at least a portion of the plurality of affordances are associated with distinct applications (Khalid Fig. 2C, 204b affordances, i.e. background and other GUI elements, that are associated with different applications).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to modify the virtual screen of Wilytsch to include a plurality of resources, i.e. content regions, as disclosed by Khalid. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 10, Khalid further discloses a particular affordance of the plurality of affordances includes an application icon that indicates a corresponding application (Khalid Fig. 2C, 204b affordances, i.e. background and other GUI elements, that are associated with different applications).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the virtual screen of Wilytsch can include a plurality of resources, i.e. content regions, as Khalid discloses. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 11, Khalid further discloses each of a portion of the plurality of affordances includes a representation of content displayed within a corresponding one of the plurality of content regions (Khalid Fig. 2C, 204b affordances, i.e. GUI elements, that are associated with different applications are representations of the applications, which can be opened as content regions).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the virtual screen of Wilytsch can include a plurality of resources, i.e. content regions, as Khalid discloses. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 12, Wilytsch discloses the plurality of affordances (Fig. 3, items displayed in 320 virtual screen including 335 message) is proximate to the computer-generated representation of the trackpad (Fig. 3, 330 virtual trackpad).
In regards to claim 13, Wilytsch discloses the plurality of affordances are associated with the physical surface (Fig. 3, 330 virtual trackpad is on the table, and is used to interact with items displayed in 320 virtual screen including 335 message, i.e. affordances).
In regards to claim 14, Khalid further discloses before detecting the first input, the computer-generated representation of the trackpad is associated with a second content region, the method further comprising, in response to detecting the first input, dissociating the computer-generated representation of the trackpad with the second content region (Khalid Par. 0263 “a virtual input device 1402 includes a virtual track pad. In still another embodiments, a virtual input device 1402 includes a virtual pointing device, such as a cursor which may be manipulated by interacting with the virtual input device 1402.”; Khalid Par. 0279 “output data may include one or more graphical user interface elements, such as a cursor”; Khalid Par. 0280 “users may access a pointing device (such as a mouse) to manipulate an image of a cursor on a screen in order to interact with a graphical user interface element”; a location of a cursor in regards to the resources indicates the resource, i.e. content region, currently associated with the virtual trackpad).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the virtual screen of Wilytsch can include a plurality of resources, i.e. content regions, as Khalid discloses. The motivation for doing so would have been to provide different resources/content to the user for interaction.
In regards to claim 15, Khalid further discloses in further response to receiving the first input, displaying, on the display, a selection indicator that is indicative of the association of the computer-generated representation of the trackpad with the first content region (Khalid Par. 0263 “a virtual input device 1402 includes a virtual track pad. In still another embodiments, a virtual input device 1402 includes a virtual pointing device, such as a cursor which may be manipulated by interacting with the virtual input device 1402.”; Khalid Par. 0279 “output data may include one or more graphical user interface elements, such as a cursor”; Khalid Par. 0280 “users may access a pointing device (such as a mouse) to manipulate an image of a cursor on a screen in order to interact with a graphical user interface element”; a location of a cursor in regards to the resources indicates the resource, i.e. content region, currently associated with the virtual trackpad; the cursor is the selection indicator).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art that the virtual screen of Wilytsch can include a plurality of resources, i.e. content regions, as Khalid discloses. The motivation for doing so would have been to provide different resources/content to the user for interaction.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to CORY A ALMEIDA whose telephone number is (571)270-3143. The examiner can normally be reached M-Th 9:00 AM - 7:30 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Nitin (Kumar) Patel can be reached at (571) 272-7677. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/CORY A ALMEIDA/ Primary Examiner, Art Unit 2628 12/29/25