DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to claim(s) 24, 42 and 43 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claim(s) 24-26, 30, 31 and 42-45 is/are rejected under 35 U.S.C. 103 as being unpatentable over Krenn, VR-OS: Interface Prototype. Full Demo. YouTube. 20 November 2013 <URL: https://www.youtube.com/watch?v=XMwqiAeQDUc> in view of Uhlig, Multi Window Drag&Drop without Popup Windows. YouTube. 19 March 2021 <URL: https://www.youtube.com/watch?v=AQilsuxy59I>.
In regards to claims 24, 42 and 43, Krenn teaches a method comprising: at a device including a display, one or more processors, and non-transitory memory (See; Description, where the demo is to be run on a head mounted display in VR, where it is inherent that a head mounted display utilizes one or more processors and memory for executing applications on the display): displaying, in a first area of an extended reality (XR) environment, a first content pane including first content including a link to second content; while displaying the first content pane in the first area, receiving a first user input selecting the first content pane and indicating a second area of the XR environment separate from the first area and not displaying a content pane; in response to receiving the first user input, moving the first content pane from the first area to the second area (See; 3:28-3:48, where each window is individually movable from a first area to a second area in response to a user dragging and dropping the windows into unused areas); and while displaying the first content pane in the second area, receiving a second user input selecting the link to the second content (See; 18:20-18:30, where the user may click an icon to open additional tabs indicative of links into unused spaces in the VR space while maintaining display of the original browser window next to them). Krenn fails to explicitly teach and indicating a third area of the XR environment separate from the second area and not displaying a content pane.
However, Uhlig shows receiving a second user input selecting the link to the second content and indicating a third area of the XR environment separate from the second area and not displaying a content pane, and in response to receiving the second user input, displaying, in the third area, a second content pane including the second content and maintaining display, in the second area, of the first content pane (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window). Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify Krenn to open additional links in browser tabs in a drag-and-drop manner such as in Uhlig so that the user has more flexibility over where they want to place the new tabs.
In regards to claim 25, Krenn teaches wherein the first content includes a webpage and the link to the second content includes a link to a second webpage (See; 18:20-18:30, where the user may click an icon to open additional tabs indicative of links into unused spaces in the VR space while maintaining display of the original browser window next to them). Uhlig also teaches wherein the first content includes a webpage and the link to the second content includes a link to a second webpage (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window).
In regards to claim 26, Uhlig teaches wherein the second user input includes a first gesture performed at a location of the link to the second content and a second gesture at a location of the second area (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window. A drag and drop includes a first gesture (grab via clicking at the location of the link) and a second gesture (drop and release the click at the second area)).
In regards to claim 30, Krenn teaches wherein an orientation of the second content pane is based on the second content (See; 18:20-18:30, where the user may click an icon to open additional tabs indicative of links into unused spaces in the VR space while maintaining display of the original browser window next to them. The additional tabs are oriented to be in browser windows oriented towards the user). Uhlig also teaches wherein an orientation of the second content pane is based on the second content (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window. The new tab is oriented to be in a browser window oriented towards the user).
In regards to claim 31, Uhlig teaches wherein the first content or the second content includes a link to third content, further comprising: receiving a user input selecting the link to the third content and indicating the second area; and in response to receiving the user input selecting the link to the third content and indicating the second area, displaying, in the second area, a third content pane including the third content (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window. It is obvious that the web browser can have more tabs open than just the two shown, where the user can click the plus icon to create a third browser and drag and drop the third browser with a link to third content to the second area).
In regards to claim 44, Krenn teaches wherein displaying the first content pane includes: obtaining the first content including the link to the second content; and displaying the first content pane including the first content (See; 18:20-18:30, where the user may click an icon to open additional tabs indicative of links into unused spaces in the VR space while maintaining display of the original browser window next to them. The first content pane includes the links to additional content in the form of tabs).
In regards to claim 45, Krenn teaches wherein moving the first content pane from the first area to the second area includes: displaying, in the second area, the first content pane; and ceasing to display, in the first area, the first content pane (See; 3:28-3:48 where each window is individually movable from a first area to a second area in response to a user dragging and dropping the windows into unused areas. Where the window is only present in one area).
Claim(s) 27, 28 and 46 is/are rejected under 35 U.S.C. 103 as being unpatentable over Krenn, VR-OS: Interface Prototype. Full Demo. YouTube. 20 November 2013 <URL: https://www.youtube.com/watch?v=XMwqiAeQDUc> in view of Uhlig, Multi Window Drag&Drop without Popup Windows. YouTube. 19 March 2021 <URL: https://www.youtube.com/watch?v=AQilsuxy59I> and further in view of Haddon (2015/0199005).
In regards to claim 27, Uhlig teaches wherein the user input selecting the link to the second content and indicating the second area includes a first gesture performed at least a threshold distance from any user interface element (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window). The combination of Krenn and Uhlig fails to explicitly teach while the user is looking at the link to the second content and a second gesture while the user is looking within the second area.
However, Haddon teaches defining a gesture function based on the location where a user is looking (See; Figs. 3-5 and p[0100]-p[0104] for using a user’s gaze position to assist in moving symbols from a first position to a second position). Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify the combination of Krenn and Uhlig to use a gaze tracking system such as in Haddon so as to further assist the user in GUI control.
In regards to claim 28, Uhlig teaches wherein the user input selecting the link to the second content and indicating the second area includes a first gesture performed at least a threshold distance from any user interface element (See; 0:08 – 0:14 where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window). The combination of Krenn and Uhlig fails to explicitly teach while the user is looking at the link to the second content and a second gesture at a relative position from the first gesture, wherein the relative position causes a corresponding change in location relative to the link to the second content that falls within the second area.
However, Haddon teaches while the user is looking at the link to the second content and a second gesture at a relative position from the first gesture (See; Figs. 3-5 and p[0100]-p[0104] for using a user’s gaze position to assist in moving symbols from a first position to a second position), wherein the relative position causes a corresponding change in location relative to the link to the second content that falls within the second area (See; Figs. 3-5 and p[0100]-p[0104], where the change in location of the symbol is based on the relative position of the eye movement and the gesture). Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify the combination of Krenn and Uhlig to use a gaze tracking system such as in Haddon so as to further assist the user in GUI control.
In regards to claim 46, Uhlig teaches wherein the user input selecting the link to the second content and indicating the second area includes a first gesture performed at a first location (See; 0:08 – 0:14, where the user drags a tab indicative of a link present on the #1 window into an unused space to the right of it to form its own window). The combination of Krenn and Uhlig fails to explicitly teach while the user is looking at the link to the second content and a second gesture at a second location a distance from the first location in a direction, further comprising determining the second area based on the distance and the direction. However, Haddon teaches while the user is looking at the link to the second content and a second gesture at a second location a distance from the first location in a direction (See; Figs. 3-5 and p[0100]-p[0104] for using a user’s gaze position to assist in moving symbols from a first position to a second position), further comprising determining the second area based on the distance and the direction (See; Figs. 3-5 and p[0100]-p[0104], where the change in location (change in distance and direction) of the symbol is based on the relative position of the eye movement and the gesture). Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify the combination of Krenn and Uhlig to use a gaze tracking system such as in Haddon so as to further assist the user in GUI control.
Claim(s) 29 is/are rejected under 35 U.S.C. 103 as being unpatentable over Krenn, VR-OS: Interface Prototype. Full Demo. YouTube. 20 November 2013 <URL: https://www.youtube.com/watch?v=XMwqiAeQDUc> in view of Uhlig, Multi Window Drag&Drop without Popup Windows. YouTube. 19 March 2021 <URL: https://www.youtube.com/watch?v=AQilsuxy59I> in view of Haddon (2015/0199005) and further in view of AWE XR, Meron Gribetz (CEO, Meta): Demo of Meta’s AR Workspace. YouTube. 23 June 2017 <URL: https://www.youtube.com/watch?v=hMicVgj0aNc>.
In regards to claim 29, the combination fails to explicitly teach wherein the first gesture is a pinch fling gesture and the second gesture is a release gesture. However, AWE XR teaches wherein the first gesture is a pinch fling gesture and the second gesture is a release gesture (See; 2:54-3:42, where the user uses pinch gestures to grab a window and opens their hand to release the window where they want it placed in the AR space). Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify the combination to use in-air pinch and release gestures to act the same as a mouse click and release, thereby allowing the user to use in-air hand gestures and removing the need for handheld input devices.
Claim(s) 32, 33, 35, 36 and 38-39 is/are rejected under 35 U.S.C. 103 as being unpatentable over Krenn, VR-OS: Interface Prototype. Full Demo. YouTube. 20 November 2013 <URL: https://www.youtube.com/watch?v=XMwqiAeQDUc> in view of Uhlig, Multi Window Drag&Drop without Popup Windows. YouTube. 19 March 2021 <URL: https://www.youtube.com/watch?v=AQilsuxy59I> and further in view of Salvatorc (2002/0118227).
In regards to claim 32, Krenn and Uhlig fail to explicitly teach wherein the second content pane is displaced in the depth direction from a first location to a second location and the third content pane is displayed at the first location. However, Salvatorc teaches wherein the second content pane is displaced in the depth direction from a first location to a second location and the third content pane is displayed at the first location (See; Figs. 2 and 4, where a user can select the position and dimensions of windows, and where the windows can be stacked in a depth direction, such as windows 220, 230).
Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify Krenn and Uhlig to use depth perspective with windows such as in Salvatorc, as it is a commonly done practice in GUIs so as to get more content on display than what would normally fit if the content didn’t overlap.
In regards to claim 33, Krenn and Uhlig fail to explicitly teach wherein the second content pane is displayed at a first location and the third content pane is displayed at a second location in front of the second content pane. However, Salvatorc teaches wherein the second content pane is displayed at a first location and the third content pane is displayed at a second location in front of the second content pane (See; Figs. 2 and 4, where a user can select the position and dimensions of windows, and where the windows can be stacked in a depth direction, such as windows 220, 230).
Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify Krenn and Uhlig to use depth perspective with windows such as in Salvatorc, as it is a commonly done practice in GUIs so as to get more content on display than what would normally fit if the content didn’t overlap.
In regards to claim 35, Krenn and Uhlig fail to explicitly teach wherein displaying the third content pane includes displaying the second content pane in a stack with the third content pane, each content pane in the stack displaced in a depth direction. However, Salvatorc teaches wherein displaying the third content pane includes displaying the second content pane in a stack with the third content pane, each content pane in the stack displaced in a depth direction (See; Figs. 2 and 4, where a user can select the position and dimensions of windows, and where the windows can be stacked in a depth direction, such as windows 220, 230).
Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify Krenn and Uhlig to use depth perspective with windows such as in Salvatorc, as it is a commonly done practice in GUIs so as to get more content on display than what would normally fit if the content didn’t overlap.
In regards to claim 36, Krenn and Uhlig fail to explicitly teach further comprising: receiving a stretch user input directed to the stack; and in response to receiving the stretch user input, displaying content panes of the stack in a stretched configuration, including displacing one or more of the content panes of the stack in a direction perpendicular to a depth dimension without displacing the one or more of the content panes of the stack in the depth direction.
However, Salvatorc teaches receiving a stretch user input directed to the stack; and in response to receiving the stretch user input, displaying content panes of the stack in a stretched configuration, including displacing one or more of the content panes of the stack in a direction perpendicular to a depth dimension without displacing the one or more of the content panes of the stack in the depth direction (See; Figs. 3B-3D and 4 where a user can select the dimensions of windows in the stack such as changing the size of window 301 in a direction perpendicular to the stack without changing the depth direction of the stack).
Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify Krenn and Uhlig to use depth perspective with windows such as in Salvatorc, as it is a commonly done practice in GUIs so as to get more content on display than what would normally fit if the content didn’t overlap.
In regards to claim 38, Salvatorc teaches receiving an expand user input directed to the stack; and in response to receiving the expand user input, displaying content panes of the stack in an expanded configuration, including displacing one or more of the content panes of the stack in a depth direction (See; Figs. 3B-3D and 4 where a user can select the dimensions of windows in the stack such as changing the size of window 301. Since the window is fully customizable this could include displacing the window 301 in the depth direction from windows 230, 220).
In regards to claim 39, Salvatorc teaches in response to receiving the expand user input, displacing the one or more of the content panes of the stack in a direction perpendicular to the depth direction (See; Figs. 3B-3D and 4 where a user can select the dimensions of windows in the stack such as changing the size of window 301 in a direction perpendicular to the stack).
Claim(s) 37 is/are rejected under 35 U.S.C. 103 as being unpatentable over Krenn, VR-OS: Interface Prototype. Full Demo. YouTube. 20 November 2013 <URL: https://www.youtube.com/watch?v=XMwqiAeQDUc> in view of Uhlig, Multi Window Drag&Drop without Popup Windows. YouTube. 19 March 2021 <URL: https://www.youtube.com/watch?v=AQilsuxy59I> in view of Salvatorc (2002/0118227) and further in view of Haddon (2015/0199005).
In regards to claim 37, Salvatorc fails to explicitly teach wherein the stretch user input includes a user gazing at a top of the stack.
However, Haddon teaches defining a gesture function based on the location where a user is looking (See; Figs. 3-5 and p[0100]-p[0104] for using a user’s gaze position to assist in moving symbols from a first position to a second position). Therefore it would have been obvious to one of ordinary skill in the art at the time of filing to modify Krenn, Uhlig and Salvatorc to use a gaze tracking system such as in Haddon so as to further assist the user in GUI control.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN A BOYD whose telephone number is (571)270-7503. The examiner can normally be reached Mon - Fri 8:00 - 5:00.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ke Xiao, can be reached at (571) 272-7776. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/JONATHAN A BOYD/Primary Examiner, Art Unit 2627