Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Where indicated, a limitation is not explicitly disclosed by the cited reference alone.
Claims 1-5, 8, and 10-20 are rejected under 35 U.S.C. 103 as being unpatentable over Santosa (US 2024/0077986) in view of Nuernberger (US 2017/0287218).
Claim 1
Examiner’s Interpretation:
Interpretation of Interactable Edge:
Applicant’s specification gives examples of interactable edges:
[Image: media_image1.png (greyscale), reproducing FIG. 8 of Applicant's drawings]
“As shown in illustration (a-1) of FIG. 8, a user controls a smart speaker 700 using edges of a digital scale 710 at his or her desk. The user maps the three interactable edge segments (i.e., ‘left’ discrete interactable edge, ‘bottom’ continuous interactable edge, ‘right’ continuous interactable edge) of the digital scale with functions of a virtual audio player 720 associated with the physical smart speaker 700. Particularly, the ‘left’ discrete interactable edge of the digital scale 710 is connected to a ‘pause/play’ action of the virtual audio player 720, the ‘bottom’ continuous interactable edge of the digital scale is connected to ‘next/previous’ action of the virtual audio player 720, and the ‘right’ continuous interactable edge of the digital scale is connected to a ‘volume’ action of the virtual audio player 720. As shown in illustration (a-2) of FIG. 8, the user can touch the scale’s ‘left’ interactable edge to start playing or pause music on the smart speaker 700. The user can swipe right on the ‘bottom’ interactable edge to change to the next track. Finally, the user can swipe down along the ‘right’ interactable edge to lower the volume of the music. Illustration (b-1) of FIG. 8 shows a similar example in which a user controls a color of a smart light bulb of a lamp 730 using the continuous interactable edge on a base of the lamp 730. As shown in illustration (b-2) of FIG. 8, the user can change the color of the light bulb by moving his or her finger from the left of the interactable edge to the middle, and to the right.”
(Specification, FIG. 8, ¶ 86).
The claim term “interactable edge” is therefore interpreted to encompass portions of edges of a physical object that are mapped to user-input functions.
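For purposes of illustrating this interpretation only, the FIG. 8 example above can be expressed as a lookup from interactable edge segments to actions. The following sketch is hypothetical: the names and structure are the examiner's invention and appear in neither the specification nor the cited references.

```python
# Hypothetical sketch of the FIG. 8 mapping: each interactable edge segment
# of the digital scale is associated with an action of the virtual audio
# player. All identifiers are invented for illustration.
EDGE_ACTION_MAP = {
    ("digital_scale", "left"):   {"kind": "discrete",   "action": "pause_play"},
    ("digital_scale", "bottom"): {"kind": "continuous", "action": "next_previous"},
    ("digital_scale", "right"):  {"kind": "continuous", "action": "volume"},
}

def action_for_edge(obj, edge):
    """Return the action associated with an interactable edge, or None."""
    return EDGE_ACTION_MAP.get((obj, edge))

print(action_for_edge("digital_scale", "right"))  # continuous 'volume' action
```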
Scope of “Affordance” in Santosa’s Disclosure:
Santosa uses the term “affordance,” which encompasses interactable edges (see Santosa, Fig. 6):
[Image: media_image2.png (greyscale), reproducing Fig. 6 of Santosa]
Because Santosa's affordances include object edges used in the same manner as Applicant's interactable edges, an edge-based affordance falls within the scope of the claimed “interactable edge.”
Claim Mapping:
Santosa discloses a method for authoring an application incorporating a tangible user interface (Santosa, ¶ 2: “Opportunistic Tangible User Interfaces (TUI)”), the method comprising:
defining, with a processor, an interactable edge of a physical object in an environment based on a physical interaction of a user with the physical object (Santosa, ¶ 129: “The system identifies the coffee cup 815 as an ATUI within the extended reality environment by mapping (overlaying) an indicator 820 onto the coffee cup 815. In this particular example, the indicator 820 takes the form of a small circle that is overlaid onto the lid 825 of the coffee cup 815”);
associating, with the processor, based on user inputs received from the user, the interactable edge of the physical object with an action to be performed in response to the user touching the interactable edge of the physical object (Santosa, ¶ 130: “Once the coffee cup 815 has been identified as an ATUI by the system, the user can activate the coffee cup 815, such as for example, by using a finger 835 to tap or double tap on the indicator 820”); and
causing, with the processor, the action to be performed in response to detecting the user touching the interactable edge of the physical object (Santosa, ¶ 131: “Upon activation of the coffee cup 815 as a useable ATUI, the system automatically overlays appropriate UIs 840, 845 on the coffee cup 815, and may also overlay one or more UIs 850 on a surface upon which the coffee cup 815 rests (or instead on the coffee cup 815 itself, such as on the lid 825). The UIs can include an indicator that directs the user on how to manipulate the coffee cup 815 to change the size and/or orientation of the virtual automobile 800”).
Santosa does not explicitly disclose associating the action with a particular edge. Nuernberger, however, discloses associating functionality with a particular edge (Nuernberger, ¶ 59: “In some examples user 200 may desire to align a linear edge or planar surface of the holographic cube 270 with an edge or surface of a physical object in room 210, such as the table 230. With reference now to FIGS. 4-7, in one example the user 200 may desire to align a bottom linear edge 278 of cube 270 with the top linear edge 280 of the table 230. As shown in FIG. 4, initially the user 200 may manipulate the cube 270 by, for example, selecting the cube for user interaction and moving the cube upward in the direction of arrow 400. In other examples a user may manipulate a virtual object by changing its orientation, modifying its shape, or otherwise interacting with the virtual object.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to associate a particular edge with UI actions, as suggested by Santosa: “Within the geometric affordance factor category may reside, for example and without limitation, object affordances associated with the surface of the physical object (e.g., the surface size or curvature), or an edge of the physical object (e.g., the edge length or curvature). Within the kinetic affordance factor category may reside, for example and without limitation, object affordances associated with a movable structure of the physical object (which may be a discrete or continuous input), feedback produced by the physical object (e.g., tactile, auditory), or the elasticity of the physical object (e.g., elastic, bendable, rigid). Within the ergonomic affordance factor category may reside, for example and without limitation, object affordances associated with a user's interaction with the physical object (e.g., how grabbable or graspable is the object).” (Santosa, ¶ 103). One of ordinary skill in the art would have been motivated to precisely anchor UI actions to a particular edge of a physical object, and would have had a reasonable expectation of success because both references perform UI operations at known locations on physical objects.
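Purely as an illustration of the claim 1 sequence as mapped above (define an interactable edge, associate it with an action, and cause the action upon touch), consider the following hypothetical sketch; the class and callbacks are invented and are not drawn from Santosa or Nuernberger.

```python
# Hypothetical sketch of claim 1's define/associate/cause sequence.
class InteractableEdgeRegistry:
    def __init__(self):
        self._actions = {}  # edge identifier -> callable to perform

    def define_and_associate(self, edge_id, action):
        """Steps 1-2: define an interactable edge and associate an action with it."""
        self._actions[edge_id] = action

    def on_touch(self, edge_id):
        """Step 3: cause the associated action to be performed when touched."""
        action = self._actions.get(edge_id)
        if action is not None:
            action()

registry = InteractableEdgeRegistry()
registry.define_and_associate("scale_left", lambda: print("pause/play"))
registry.on_touch("scale_left")  # prints "pause/play"
```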
Claim 2
Santosa does not disclose, but Nuernberger discloses detecting, with the processor, a plurality of edges of the physical object in the environment (Nuernberger, ¶ 42: “For example, candidate anchor features in the form of selected 3D linear edges and planar surfaces of physical objects in room 210”), the interactable edge being at least part of one of the plurality of edges (e.g. edges of a table); and tracking, with the processor, the plurality of edges over time by matching edges of the physical object detected over time with previously detected edges of the physical object (Nuernberger, ¶ 47: “In some examples, the foregoing process for extracting candidate anchor features 74 in the form of three dimensional line segments may be performed at 15 Hz, 30 Hz, 60 Hz, or other suitable frequency.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to detect and track a plurality of edges and to associate a particular edge with UI actions, for the same reasons and with the same reasonable expectation of success as set forth above for claim 1 (see Santosa, ¶ 103).
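For illustration of the claimed tracking step only, edges detected in successive frames could be matched against previously detected edges by endpoint proximity, as in the hypothetical sketch below; the threshold and data layout are the examiner's assumptions, not Nuernberger's disclosure.

```python
# Hypothetical sketch: match edges detected in the current frame against
# previously detected edges by summed endpoint distance.
import math

def endpoint_distance(e1, e2):
    """Sum of Euclidean distances between corresponding 2D endpoints."""
    return math.dist(e1[0], e2[0]) + math.dist(e1[1], e2[1])

def match_edges(previous, current, max_dist=20.0):
    """Associate each current edge with the closest previous edge, if close enough."""
    matches = {}
    for i, cur in enumerate(current):
        best = min(previous, key=lambda prev: endpoint_distance(prev, cur), default=None)
        if best is not None and endpoint_distance(best, cur) <= max_dist:
            matches[i] = previous.index(best)
    return matches

prev = [((0, 0), (100, 0))]    # edge detected in the prior frame
cur = [((2, 1), (101, -1))]    # the same edge, slightly displaced
print(match_edges(prev, cur))  # {0: 0}
```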
Claim 3
Santosa does not explicitly disclose, but Nuernberger discloses, the detecting the plurality of edges of the physical object further comprising: detecting, with the processor, a plurality of edges of the environment including the physical object; and identifying, with the processor, using an object detection technique, the plurality of edges of the object as a subset of the plurality of edges of the environment (Nuernberger, ¶ 42: “For example, candidate anchor features in the form of selected 3D linear edges and planar surfaces of physical objects in room 210”; Santosa, ¶ 103: “object affordances associated with the surface of the physical object (e.g., the surface size or curvature), or an edge of the physical object (e.g., the edge length or curvature)”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to identify the object's edges as a subset of the detected edges of the environment, for the same reasons and with the same reasonable expectation of success as set forth above for claim 1 (see Santosa, ¶ 103).
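For illustration of the claimed subset identification only, the hypothetical sketch below keeps only those environment edges whose endpoints fall inside an object detector's bounding box; the box and detector output are assumed, not taken from either reference.

```python
# Hypothetical sketch: identify the object's edges as a subset of all edges
# detected in the environment, using an assumed object-detector bounding box.
def edges_within_box(edges, box):
    """Keep edges whose endpoints both fall inside the bounding box."""
    xmin, ymin, xmax, ymax = box
    inside = lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
    return [e for e in edges if inside(e[0]) and inside(e[1])]

environment_edges = [((0, 0), (500, 0)),      # e.g., a wall edge
                     ((120, 80), (220, 80))]  # e.g., a table edge
table_box = (100, 50, 250, 200)               # assumed detector output
print(edges_within_box(environment_edges, table_box))  # only the table edge
```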
Claim 4
Santosa does not explicitly disclose, but Nuernberger discloses, the tracking the plurality of edges over time further comprising: tracking, with the processor, a pose of the object over time; and matching edges of the physical object detected over time with previously detected edges of the physical object, based on changes in the pose of the object over time (Nuernberger, ¶ 40: “In other examples, physical features of a physical object may comprise one or more of a volume, orientation, and size of an object.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to match detected edges with previously detected edges based on the tracked pose of the object, for the same reasons and with the same reasonable expectation of success as set forth above for claim 1 (see Santosa, ¶ 103).
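For illustration of pose-based matching only, previously detected edges could be transformed by the object's pose change before matching against current detections, as in the hypothetical 2D sketch below; the pose representation is a simplification assumed by the examiner.

```python
# Hypothetical sketch: predict where a previously detected edge should appear
# after a 2D rigid pose change (translation dx, dy and rotation theta).
import math

def apply_pose(edge, dx, dy, theta):
    """Rigidly transform a 2D edge; the predicted edge is then matched."""
    def xform(p):
        x, y = p
        return (x * math.cos(theta) - y * math.sin(theta) + dx,
                x * math.sin(theta) + y * math.cos(theta) + dy)
    return (xform(edge[0]), xform(edge[1]))

previous_edge = ((0, 0), (100, 0))
predicted = apply_pose(previous_edge, dx=5, dy=3, theta=0.0)
print(predicted)  # ((5.0, 3.0), (105.0, 3.0)) -- compare against current detections
```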
Claim 5
Santosa discloses the defining the interactable edge of the physical object further comprising: detecting, with at least one sensor, a user touching an edge of the physical object; and defining, with the processor, at least part of the edge to be the interactable edge in response to the user touching the edge (Santosa, ¶ 129: “The system identifies the coffee cup 815 as an ATUI within the extended reality environment by mapping (overlaying) an indicator 820 onto the coffee cup 815. In this particular example, the indicator 820 takes the form of a small circle that is overlaid onto the lid 825 of the coffee cup 815”).
Claim 8
Santosa discloses the associating the interactable edge with the action further comprising: displaying, on a display of an augmented reality device, an augmented reality graphical user interface; and selecting, with the processor, the action to be performed based on user inputs received from the user via the augmented reality graphical user interface (Santosa, ¶¶ 128, 131: “The user is also presented with two virtual controllers 805, 810, where the controller 805 on the left is provided for transforming the view (e.g., changing the magnification and/or orientation) of the virtual automobile, and the controller 810…may also overlay one or more UIs 850 on a surface upon which the coffee cup 815 rests (or instead on the coffee cup 815 itself, such as on the lid 825). The UIs can include an indicator that directs the user on how to manipulate…”).
Claim 10
Santosa discloses the causing the action to be performed further comprising: detecting, with the processor, whether the user has touched the interactable edge of the physical object; and causing, with the processor, a discrete state change of at least one of (i) a virtual object displayed in an augmented reality graphical user interface of an augmented reality device and (ii) a controllable physical device in the environment, the discrete state change being the action associated with the interactable edge (e.g. discrete states corresponding to temperature; ¶ 145: “The user can adjust water temperature, coffee grind, or steamed milk temperature parameters by sliding a finger along appropriate ones of the virtual slider selectors 965, 970, 975 overlaid on the coffee machine 945.”)
Claim 11
Santosa discloses the causing the action to be performed further comprising: detecting, with the processor, whether the user has touched the interactable edge of the physical object; and causing, with the processor, a state change over time of at least one of (i) a virtual object displayed in an augmented reality graphical user interface of an augmented reality device and (ii) a controllable physical device in the environment, the change over time being the action associated with the interactable edge (¶ 145: “The user can adjust water temperature, coffee grind, or steamed milk temperature parameters by sliding a finger along appropriate ones of the virtual slider selectors 965, 970, 975 overlaid on the coffee machine 945.”)
Claim 12
Santosa discloses the causing the action to be performed further comprising: detecting, with the processor, a location on the interactable edge of the physical object that is touched by the user; and causing, with the processor, a discrete state change of at least one of (i) a virtual object displayed in an augmented reality graphical user interface of an augmented reality device and (ii) a controllable physical device in the environment, the discrete state change being the action associated with the interactable edge, the discrete state change depending on the location on the interactable edge of the physical object that is touched by the user (e.g. Fig. 13B; discrete states corresponding to temperatures; ¶ 145:
[Image: media_image3.png (greyscale), reproducing FIG. 13B of Santosa]
“As represented in scene F of FIG. 13B, subsequent to the co-worker 940 making a coffee selection (not expressly shown), the system may generate several new virtual UIs. In this particular example, the system overlays a virtual water temperature slider selector 965, a virtual coffee grind slider selector 970, and a virtual dose (steamed milk temperature) slider selector 975 on the coffee machine 945. Each of the virtual water temperature slider selector 965, the virtual coffee grind slider selector 970, and the virtual dose slider selector 975, also includes a virtual floating gauge or indicator that can be used to display respective water temperature, coffee grind, and steamed milk temperature values.”)
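For illustration of a location-dependent discrete state change only, a touch position along an interactable edge could be quantized into one of several discrete zones, as in the hypothetical sketch below; the zone boundaries and preset values are assumed, not taken from Santosa.

```python
# Hypothetical sketch: quantize the touched location along an edge of length
# `length` into one of len(states) discrete zones (e.g., temperature presets).
def discrete_state(position, length, states):
    index = min(int(position / length * len(states)), len(states) - 1)
    return states[index]

presets = ["low", "medium", "high"]
print(discrete_state(position=75.0, length=90.0, states=presets))  # "high"
```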
Claim 13
Santosa discloses the causing the action to be performed further comprising:
detecting, with the processor, a location over time on the interactable edge of the physical object that is touched by the user (e.g. slider interface Fig. 13B); and
causing, with the processor, a state change over time of at least one of (i) a virtual object displayed in an augmented reality graphical user interface of an augmented reality device and (ii) a controllable physical device in the environment, the state change over time being the action associated with the interactable edge, the state change over time depending on the location over time on the interactable edge of the physical object that is touched by the user (Fig. 13B; ¶ 145:
“As represented in scene F of FIG. 13B, subsequent to the co-worker 940 making a coffee selection (not expressly shown), the system may generate several new virtual UIs. In this particular example, the system overlays a virtual water temperature slider selector 965, a virtual coffee grind slider selector 970, and a virtual dose (steamed milk temperature) slider selector 975 on the coffee machine 945. Each of the virtual water temperature slider selector 965, the virtual coffee grind slider selector 970, and the virtual dose slider selector 975, also includes a virtual floating gauge or indicator that can be used to display respective water temperature, coffee grind, and steamed milk temperature values. The user can adjust water temperature, coffee grind, or steamed milk temperature parameters by sliding a finger along appropriate ones of the virtual slider selectors 965, 970, 975 overlaid on the coffee machine 945. In scene F, the co-worker 940 is shown to be adjusting the steamed milk temperature by sliding a finger along the virtual dose slider selector 975 overlaid on the coffee machine 945. In this example, this also results in temporary enlargement/magnification of the virtual gauge portion of the virtual dose slider selector 975 to better reveal the adjusted milk steaming temperature.”).
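For illustration of a state change that depends on the touched location over time, a finger position sampled along the edge could be mapped linearly onto a continuous parameter, as in the hypothetical sketch below; the edge length and value range are assumed.

```python
# Hypothetical sketch: map a finger position in [0, length] along an edge to a
# continuously varying parameter value in [lo, hi] (e.g., a slider).
def edge_position_to_value(position, length, lo, hi):
    t = max(0.0, min(1.0, position / length))
    return lo + t * (hi - lo)

# Positions sampled over time along a 120 mm edge, mapped to 55-70 degrees C:
for position in (0.0, 60.0, 120.0):
    print(edge_position_to_value(position, 120.0, lo=55.0, hi=70.0))
# 55.0 -> 62.5 -> 70.0
```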
Claim 14
Santosa discloses wherein the action includes one of (i) a change of status and (ii) a change over time of a virtual object displayed in an augmented reality graphical user interface of an augmented reality device, the causing the action to be performed further comprising: displaying, in the augmented reality graphical user interface, the one of (i) the discrete state change and (ii) the state change over time of the virtual object (e.g. changing position of virtual object over time; change of on/off state; Fig. 5; ¶ 100:
[Image: media_image4.png (greyscale), reproducing FIG. 5 of Santosa]
“One example of a 2D input is moving multiple fingers on a surface (e.g., a desk or table surface) to move a pointer or to control the position of a virtual object in a 2D space (e.g., left/right and up/down). A 3D input can be used to control three values in three different dimensions. One example of a 3D input is shown to include pushing a mug forward to correspondingly move a virtual cube forward in a CAD application. Other examples of 3D inputs include, without limitation, moving a physical object in the air or along a surface (e.g., a desk or table surface) to control the position of a virtual object in a 3D space (e.g., left/right, up/down, and forward/backward) or rotating the physical object to control the orientation of the virtual object in a 3D space”).
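For illustration of the kind of 3D input quoted above (a virtual object following a manipulated physical object) only, consider the hypothetical sketch below; the tracking source and coordinate frame are the examiner's assumptions.

```python
# Hypothetical sketch: place a virtual object at the tracked physical object's
# position plus a fixed offset, so the virtual object follows the physical one.
def follow(physical_position, offset=(0.0, 0.0, 0.0)):
    return tuple(p + o for p, o in zip(physical_position, offset))

mug_positions = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.1), (0.0, 0.0, 0.2)]  # pushed forward
for pos in mug_positions:
    print("virtual cube at", follow(pos, offset=(0.0, 0.05, 0.0)))
```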
Claim 15
Santosa discloses wherein the discrete state change of the virtual object is an immediate change in at least one of a position, a color, a shape, and a pose of the virtual object (Santosa, ¶ 99: “one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D) gestures (inputs) via user-manipulated physical objects”).
Claim 16
Santosa discloses wherein the state change over time of the virtual object is an animation of the virtual object over time (¶ 58: “The client system 200 may also, for instance, animate a press of the virtual user interface button along with the button press gesture.”).
Claim 17
Santosa discloses wherein the action includes one of (i) a change of state and (ii) a change over time of a controllable physical device in the environment, the causing the at least one action to be performed further comprising: transmitting, with a transceiver, a command message configured to cause the controllable physical device to perform the one of (i) the discrete state change and (ii) the state change over time (e.g. On/Off State; Volume State; Santosa, ¶ 100: “One example of a 0D input is shown to include tapping on a surface (e.g., a table surface) to press a virtual button that toggles an input between on and off state or simply expresses confirmation of an action”).
[Image: media_image5.png (greyscale)]
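For illustration of the claimed command-message transmission only, the hypothetical sketch below serializes a device command; the message format and transport are the examiner's assumptions and do not appear in Santosa.

```python
# Hypothetical sketch: compose and "transmit" a command message that causes a
# controllable physical device to change state. The format is invented.
import json

def build_command(device_id, action, value=None):
    """Serialize a device command for transmission via some transceiver."""
    return json.dumps({"device": device_id, "action": action, "value": value}).encode()

def transmit(message):
    """Stand-in for a real transceiver; prints the payload instead of sending."""
    print("TX:", message.decode())

transmit(build_command("smart_speaker_700", "power", "on"))
```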
Claim 18
Santosa discloses wherein the discrete state change of the controllable physical device is an immediate change in at least one of a power state and an operating mode of the controllable physical device (e.g. On/Off State; Position State; Santosa, ¶ 100: “One example of a 0D input is shown to include tapping on a surface (e.g., a table surface) to press a virtual button that toggles an input between on and off state or simply expresses confirmation of an action”).
Claim 19
Santosa discloses wherein the state change over time is a motion of the controllable physical device over time (¶ 100: “One example of a 2D input is moving multiple fingers on a surface (e.g., a desk or table surface) to move a pointer or to control the position of a virtual object in a 2D space (e.g., left/right and up/down)”)
Claim 20
Santosa does not disclose, but Nuernberger discloses, the associating the interactable edge with the action further comprising: displaying, on a display of an augmented reality device, an augmented reality graphical user interface, the augmented reality graphical user interface including highlighting superimposed upon the interactable edge of the physical object (Nuernberger, ¶ 73: “In this manner, the user 200 is provided with a visual indication that highlights the top linear edge 280, thereby informing the user that alignment with this physical edge is available.”).
Before the effective filing date of the claimed invention, it would have been obvious to one of ordinary skill in the art to highlight the interactable edge in the augmented reality graphical user interface, for the same reasons and with the same reasonable expectation of success as set forth above for claim 1 (see Santosa, ¶ 103).
Allowable Subject Matter
Claims 6, 7, and 9 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter:
Regarding claim 6, the cited references do not teach or suggest mapping based on the claimed sliding.
Regarding claim 7, the cited references do not teach defining different interactions based on location.
Regarding claim 9, the cited references do not suggest recording the action, but rather use pre-defined action tables.
Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to RYAN M GRAY whose telephone number is (571)272-4582. The examiner can normally be reached on Monday through Friday, 9:00am-5:30pm (EST).
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Kee Tung can be reached on (571)272-7794. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see https://ppair-my.uspto.gov/pair/PrivatePair. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/RYAN M GRAY/Primary Examiner, Art Unit 2611