DETAILED ACTION
This Office action is in response to the communication filed on January 21, 2026. Claims 1-22 remain pending in this application. The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Priority
Applicant’s claim for foreign priority under 35 U.S.C. 119(a)-(d), based on an application filed in Korea on April 7, 2021, has been acknowledged and considered by the Examiner. However, the priority application appears to provide drawing support for certain embodiments, such as those of Figs. 4-10 and 11-14, but not to the level of detail now claimed; accordingly, for the remaining and more detailed subject matter, the effective filing date is April 4, 2022. Receipt is acknowledged of papers submitted under 35 U.S.C. 119(a)-(d), which are placed on record in the application file.
Response to Arguments
Applicant’s arguments with respect to amended claims 1, 21, and 22 in the Remarks (pages 10-12) have been fully considered but are moot because they do not apply to the combination of references used in the current rejection.
U.S. Patent Publication 2018/0158250 A1 by Yamamoto in view of U.S. Patent Publication 2020/0333891 A1 by Poore et al. (“Poore”) addresses the limitations set forth in amended claims 1 and 21-22, as set forth in the new grounds of rejection below.
Applicant's arguments with respect to claims 2-20 in the Remarks (pages 10-12) have been fully considered, but they are not persuasive because those claims depend upon the features recited in the amended independent claims.
Claim Rejections - 35 USC § 112
The following is a quotation of 35 U.S.C. 112(b):
(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.
The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:
The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.
Claims 1-22 are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. The newly amended claims recite: “a handheld electronic device configured to generate writing based on a detected physical writing motion on the handheld electronic device by a user using the handheld device, wherein the writing information is generated based on the touch on the handheld device of a pen or other writing implement controlled by the user.” However, later embodiments in the dependent claims recite that both hands are tracked and do not require a handheld electronic device. It is unclear how the user can both hold a handheld electronic device and a pen or other writing implement and still have both hands tracked in order to make specific gestures in the virtual input space. See, for instance, claims 6-11 and Figs. 4-11. Claim 6 recites: “where the user performs, with both tracked hands, a motion of clenching a fist…when the user performs, with both tracked hands, a motion of adjusting the distance between the hands while maintaining the tracked hands fisted.” This would not be operable if the user were using a handheld device with a writing implement. Applicant appears to be combining different embodiments, and the public would not know where the metes and bounds of the invention begin and end.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102 of this title, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-3, 6-8, 14-16, 18, and 20-22 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication 2018/0158250 A1 by Yamamoto in view of U.S. Patent Publication 2020/0333891 A1 by Poore.
Regarding claim 1, Yamamoto discloses an electronic system (Fig. 3) comprising:
an electronic device configured to generate information based on a detected writing motion of writing with respect to the electronic device (Figs. 5A-5B and 7A-7C; [0026] and [0021], In some implementations, the user may annotate the virtual notation surface 600 of a virtual environment using a marking implement such as, for example, one or both of the electronic controllers 200A, 200B, a hand/finger whose position and orientation is tracked by the system, or other designated marking implement whose position and orientation is tracked by the system); and
a display device configured to display a scene in the virtual space corresponding to a viewpoint of the user and provide the displayed scene to the user ([0020], FIG. 4A illustrates a virtual scene 500, or virtual environment 500, viewed by the user, for example, on the display 140 of the HMD 100),
wherein the display device is configured to receive the generated writing information describing the detected physical writing motion, the displayed scene including a reference object
([0020]-[0021], In a system and method, in accordance with implementations described herein, the user may cause a virtual notation surface (or, hereinafter, simply a virtual surface, for ease of discussion) to be materialized in the virtual environment 500 displayed by HMD and transmitted by electronic controllers. The user may then annotate the virtual surface with, for example, text, sketches and the like, without leaving the virtual environment 500), and
in response to the reference object being included in the scene, display handwriting generated based on the writing information on the reference object based on information transmitted from the electronic device (Figs. 5A-5B, [0020]-[0021], While experiencing the virtual environment 500 illustrated in the example shown in FIG. 4A, the user may find it useful to, for example, make a note, for example, in a text form, or a sketch, or other manner, based on, for example, an observation made while experiencing the virtual environment 500, as a reminder for later reference, to record information to be shared with other users, and the like)
wherein, in the virtual space, the reference object is arranged on a surface of the electronic device, and at least one of both hands of the user is tracked to be displayed in the virtual space to generate hand tracking information, and the one or more objects present in the virtual space are controlled by the tracked hand of the user based on the generated hand tracking information (Figs. 5D-5E, For example, as shown in FIG. 5D, with the first controller 200A positioned at a left edge of the virtual surface 600 and the second controller 200B positioned at a right edge of the virtual surface 600, the user may actuate the manipulation device 205A of the first controller 200A and the manipulation device 205B of the second controller 200B, and move the left and right edges outward).
However, Yamamoto does not teach a handheld electronic device with detected physical writing motion of writing on the handheld electronic device by a user using the handheld electronic device, wherein the display device is configured to display the displayed scene including a reference object whose location corresponds to the location of the handheld electronic device within the viewpoint of the user, based on information transmitted from the handheld electronic device, and wherein the writing information is generated based on touch on the handheld electronic device of a pen or other writing implement controlled by the user.
While Yamamoto teaches in [0002] that a user may interact with objects in this virtual environment using various electronic devices, such as, for example, a helmet or other head mounted device including a display, glasses or goggles that a user looks through when viewing a display device, one or more handheld electronic devices such as controllers, joysticks and the like, gloves fitted with sensors, keyboards, a mouse, and other electronic devices, Yamamoto does not teach that the writing motion is detected on the handheld device, such as by a pen or other writing implement.
The virtual annotation of Yamamoto can be superimposed on the handheld writing device.
However, in the analogous art of virtual content display and manipulation, Poore teaches configuring the head-mounted display to present information in addition to (i.e., overlaid with) the physical environment viewed by the user. A user interacted with a stylus while wearing the head-mounted display, wherein the stylus was configured as a touch-based input device that provides input to the electronic system and to an external (portable, handheld, smart phone) device based on contact with a tip at a terminal end of the stylus (Poore Fig. 1, [0021]-[0022]). The interface surface 50 may include a touch-sensitive panel or a digitizer to detect input when contacted by the stylus 300 that overlaps with a display screen of the external device 90, and inputs from the stylus 300 can be used to modify information on the display screen of the external device 90 (e.g., to provide navigational inputs, pointing inputs, writing inputs, etc.). Alternatively, or in combination, the electronic system 100 can be configured to use inactive surfaces or any surfaces of the environment as the interface surface 50, for example, by transducing movements of the stylus 300 using sensors in the head-mounted device 200 and/or stylus 300, and converting such inputs into a visible response presented on the display 220 of the head-mounted device (Poore Fig. 1; [0023]-[0024]). It would have been obvious before the effective filing date of the invention to have substituted, for the electronic controller handheld device used for handwriting inputs in Yamamoto, the stylus and portable device of Poore for handwriting inputs for virtual-space gestures and annotations. One having ordinary skill in the art would have been motivated to have configured the head-mounted display to present information in addition to (i.e., overlaid with) the physical environment viewed by the user, because the physical environment includes commonly used electronic devices and it would be desirable to provide the user with additional tools with which the user can observe or otherwise interact with the physical environment while wearing a head-mounted device (Poore Fig. 1, [0021]-[0022] and [0014]).
Regarding claim 2, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein each of the one or more objects present in the virtual space is controlled by the hand tracking information while the handwriting is being maintained (Figs. 5D-5G; [0028], The user may want to adjust a size and/or a shape and/or an orientation of the virtual surface 600, either before or after the virtual surface 600 has been annotated in the manner described above. The user may adjust a position of just one of the edges of the virtual surface 600 in a similar manner, by actuating and moving one of the controllers 200A, 200B from a first position at an edge of the virtual surface 600 to a second position, to make a corresponding adjustment in size/shape of the virtual surface 600. The user may terminate this adjustment in size and/or shape by, for example, releasing the manipulation devices 205A, 205B).
Regarding claim 3, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding a target object among the one or more objects arranged in the virtual space, moving the target object, and then releasing the target object, the target object is moved in the virtual space according to the motion of moving and is arranged at a position in the virtual space corresponding to a position at which the motion of releasing is performed ([0039]-[0040], Upon detection that the virtual object 700A has been selected, the user may implement a gesture indicating a desired movement of the selected virtual object 700A. For example, the user may implement a rotating or twisting gesture, as shown in FIG. 7H, to draw the selected virtual object 700A closer to the user, for example, from the far virtual field toward the near virtual field, so that the user may interact with the selected virtual object 700A after it is released from the draw/pull gesture).
Regarding claim 6, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with both tracked hands, a motion of clenching a fist within a predetermined distance, or a motion of clenching a fist within the predetermined distance and then moving the hands away from each other, a plane of a size corresponding to a distance between the hands is generated in the virtual space, and when the user performs, with both tracked hands, a motion of adjusting the distance between the hands while maintaining the tracked hands fisted, the size of the plane generated in the virtual space is controlled according to the adjusted distance between the hands (Figs. 6A-6B; [0030], with the controllers clenched in a fist, moving the opposite bottom and top corners of the virtual surface 600 toward each other proportionally decreased a size and scale of the virtual surface 600 and the annotation received on the virtual surface 600, from the arrangement shown in FIG. 6A to the arrangement shown in FIG. 6B. An increase in size and/or scale of the virtual surface 600 and the annotation may be achieved by, for example, moving the opposite corners in a direction away from each other, for example, from the arrangement shown in FIG. 6B to the arrangement shown in FIG. 6A.)
Regarding claim 7, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with both tracked hands or one tracked hand, a motion of holding and moving a plane in the virtual space and then releasing the plane, the plane is moved in the virtual space according to the motion of moving and is then arranged at a position in the virtual space at which the motion of releasing is performed (Figs. 7A and 7B; [0033], As shown in FIG. 7B, the first user A (or the second user B) may select one of the virtual surfaces 600, for example, the virtual surface 600C, for review by, for example, selecting the virtual surface 600C using one, or both, of the controllers A1, A2. The virtual surface was arranged in an operable area in near virtual space to which notes and annotations could be added).
Regarding claim 8, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with both tracked hands, a motion of holding a plane in the virtual space and reducing a distance between the hands to a predetermined distance or less, in the absence of an object attached to the plane, the plane is deleted from the virtual space ([0036], In some situations, virtual objects or features, such as, for example, the virtual surfaces 600 and/or objects to be added onto the virtual surfaces 600 may be outside of the virtual reach of the user in the virtual space 550. Similarly, in some situations, the user may wish to move virtual objects or features, such as, for example, the virtual surfaces 600 and/or other virtual objects, from the relatively near field to a virtual position further from the user to, for example, clear the near field virtual space for other tasks. In some implementations, this movement of a virtual object, from a relatively far field position in the virtual space 500 to a relatively near field position in the virtual space 500, and/or from a relatively near field position to a relatively far field position in the virtual space 500, may be accomplished in response to detection of a gesture corresponding to a movement to be executed, such as the gesture to minimize).
in the presence of an object attached to the plane, the plane is not reduced to less than a size of an edge of the object attached to the plane ([0037], In addition to, or instead of, directly marking on, or drawing on, or writing on, the virtual surface, the user may choose to annotate, or draw on, or mark, or otherwise add to the virtual surface 600 by, for example, selecting one or more of the virtual objects 700 to be added to, or included on, the virtual surface 600. In that way, the virtual surface is cleared only up to the edge of the attached objects).
Regarding claim 14, Yamamoto of the combination of references further teaches the system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding a target object among a plurality of objects arranged in the virtual space and moving the target object to be within a predetermined distance to another object to align the target object and the other object at a predetermined angle, and then releasing the target object, a plane to which the target object and the other object are to be attached is generated in the virtual space (Fig. 7E; [0034], the second user B may also select the virtual surface 600E, including park information, as shown in FIG. 7D. The second user B may use the information stored on the virtual surface 600E to further annotate the virtual surface 600C, as shown in FIG. 7E. The further annotated virtual surface 600C may then be available for review by the first user A. The further annotation/park was moved next to 600C at an angle which aligned it with the 600C virtual surface).
Regarding claim 15, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding a target object in the virtual space and moving the target object to be within a predetermined distance on a plane, a feed forward corresponding to the target object is displayed on the plane in the virtual space, and when the user performs a motion of releasing the target object, the target object is attached to a position of the feed forward displayed on the plane in the virtual space (Figs. 7A-7B; [0032]-[0033], Multiple virtual notation surfaces 600 (in particular, virtual surface 600A, 600B, 600C, 600D and 600E) may be available to the first user A and the second user B in the shared virtual environment 550, for access, review, revision, collaboration, and the like. As shown in FIG. 7B, the first user A (or the second user B) may select one of the virtual surfaces 600, for example, the virtual surface 600C, for review by, for example, selecting the virtual surface 600C using one, or both, of the controllers A1, A2. In reviewing/feedforwarding the selected virtual surface 600C, the first user A may, for example, enlarge the selected virtual surface 600C, for example, in the manner described above with respect to FIGS. 5D-5E and 6A-6B, upon release of the gesture after selection as in [0039]).
Regarding claim 16, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of moving a target object in a plane arranged in the virtual space on the plane, the target object is moved according to the motion performed by the user on the plane in the virtual space ([0039], Upon detection that the virtual object 700A has been selected, the user may implement a gesture indicating a desired movement of the selected virtual object 700A. For example, the user may implement a rotating or twisting gesture, as shown in FIG. 7H, to draw the selected virtual object 700A closer to the user, for example, from the far virtual field toward the near virtual field, so that the user may interact with the selected virtual object 700A).
Regarding claim 18, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding a first plane in the virtual space and allowing the first plane to penetrate through a second plane to which one or more objects are attached in a predetermined direction, an object in an area in the second plane through which the first plane penetrates is moved from the second plane to the first plane according to the motion performed by the user (Figs. 7E-7F; [0034], The second user B may view the virtual surface 600C, which has been annotated by the first user A, either substantially at the same time, in the shared virtual environment 550, or at a later time, and may further revise or otherwise annotate the virtual surface 600C. The second user B may use the information stored on the virtual surface 600E to further annotate the virtual surface 600C as shown in FIG. 7E. The further annotated virtual surface 600C (the first plane, as having collided with the second plane) may then be available for review by the first user A, either substantially in real time in the shared virtual space 550, or at a later time by accessing the virtual surface 600C from storage as shown in FIG. 7F).
Regarding claim 20, Yamamoto of the combination of references further teaches the electronic system of claim 1, wherein at least one of the objects present in the virtual space is controlled by one or more of a plurality of users accessing the virtual space (Figs. 7A-7B; [0032]-[0033], Multiple virtual notation surfaces 600 (in particular, virtual surface 600A, 600B, 600C, 600D and 600E) may be available to the first user A and the second user B in the shared virtual environment 550, for access, review, revision, collaboration, and the like).
Regarding claim 21, Yamamoto discloses an electronic system (Fig. 3) comprising:
an electronic device configured to generate information based on a detected writing motion of writing with respect to the electronic device (Figs. 5A-5B and 7A-7C; [0026] and [0021], In some implementations, the user may annotate the virtual notation surface 600 of a virtual environment using a marking implement such as, for example, one or both of the electronic controllers 200A, 200B, a hand/finger whose position and orientation is tracked by the system, or other designated marking implement whose position and orientation is tracked by the system); and
a display device configured to display a scene in the virtual space corresponding to a viewpoint of the user and provide the displayed scene to the user ([0020], FIG. 4A illustrates a virtual scene 500, or virtual environment 500, viewed by the user, for example, on the display 140 of the HMD 100), wherein the display device is configured to
receive the generated writing information describing the detected physical writing motion from the electronic device ([0020]-[0021], In a system and method, in accordance with implementations described herein, the user may cause a virtual notation surface (or, hereinafter, simply a virtual surface, for ease of discussion) to be materialized in the virtual environment 500 displayed by HMD and transmitted by electronic controllers. The user may then annotate the virtual surface with, for example, text, sketches and the like, without leaving the virtual environment 500), and
display a reference object being included in the scene among objects arranged in the virtual space together with handwriting written on the reference object based on the writing information, a sensor configured to track at least one of both hands of the user (Figs. 5A-5B and 7A-7C; [0026] and [0021], In some implementations, the user may annotate the virtual notation surface 600 of a virtual environment using a marking implement such as, for example, one or both of the electronic controllers 200A, 200B, a hand/finger whose position and orientation is tracked by the system, or other designated marking implement whose position and orientation is tracked by the system, where the annotation of the virtual notation surface 600 can be among other objects 600A-600E);
wherein, in the virtual space, the reference object is arranged on a surface of the electronic device, and moved according to a hand movement from the surface of the electronic device, at least one of both hands of the user is tracked by the sensor to generate hand tracking information to be displayed in the virtual space, the one or more objects present in the virtual space are controlled based on the generated hand tracking information (Figs. 5D-5E, For example, as shown in FIG. 5D, with the first controller 200A positioned at a left edge of the virtual surface 600 and the second controller 200B positioned at a right edge of the virtual surface 600, the user may actuate the manipulation device 205A of the first controller 200A and the manipulation device 205B of the second controller 200B, and move the left and right edges outward).
However, Yamamoto does not teach a handheld electronic device with detected physical writing motion of writing on the handheld electronic device by a user using the handheld electronic device, wherein the display device is configured to receive the generated writing information describing the physical writing motion from the handheld electronic device, wherein the writing information is generated based on touch on the handheld electronic device of a pen or other writing implement controlled by the user, and the reference object is arranged on the surface of the handheld electronic device and moved according to a hand movement from the surface.
While Yamamoto teaches in [0002] that a user may interact with objects in this virtual environment using various electronic devices, such as, for example, a helmet or other head mounted device including a display, glasses or goggles that a user looks through when viewing a display device, one or more handheld electronic devices such as controllers, joysticks and the like, gloves fitted with sensors, keyboards, a mouse, and other electronic devices, Yamamoto does not teach that the writing motion is detected on the handheld device, such as by a pen or other writing implement.
However, in the analogous art of virtual content display and manipulation, Poore teaches configuring the head-mounted display to present information in addition to (i.e., overlaid with) the physical environment viewed by the user. A user interacted with a stylus while wearing the head-mounted display, wherein the stylus was configured as a touch-based input device that provides input to the electronic system and to an external (portable, handheld, smart phone) device based on contact with a tip at a terminal end of the stylus (Poore Fig. 1, [0021]-[0022]). The interface surface 50 may include a touch-sensitive panel or a digitizer to detect input when contacted by the stylus 300 that overlaps with a display screen of the external device 90, and inputs from the stylus 300 can be used to modify information on the display screen of the external device 90 (e.g., to provide navigational inputs, pointing inputs, writing inputs, etc.). Alternatively, or in combination, the electronic system 100 can be configured to use inactive surfaces or any surfaces of the environment as the interface surface 50, for example, by transducing movements of the stylus 300 using sensors in the head-mounted device 200 and/or stylus 300, and converting such inputs into a visible response presented on the display 220 of the head-mounted device (Poore Fig. 1; [0023]-[0024]). It would have been obvious before the effective filing date of the invention to have substituted, for the electronic controller handheld device used for handwriting inputs in Yamamoto, the stylus and portable device of Poore for handwriting inputs for virtual-space gestures and annotations. One having ordinary skill in the art would have been motivated to have configured the head-mounted display to present information in addition to (i.e., overlaid with) the physical environment viewed by the user, because the physical environment includes commonly used electronic devices and it would be desirable to provide the user with additional tools with which the user can observe or otherwise interact with the physical environment while wearing a head-mounted device (Poore Fig. 1, [0021]-[0022] and [0014]).
Regarding claim 22, the above rejection of the electronic system of claim 1 applies to the corresponding claimed method of operating an electronic device.
Claims 4-5 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication 2018/0158250 A1 by Yamamoto in view of U.S. Patent Publication 2020/0333891 A1 by Poore, and further in view of Foreign Patent Publication CN 111913562 A by Lu, for which a machine translation is used for the citations below.
Regarding claim 4, Yamamoto does not teach the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of crumpling a target object among the one or more objects arranged in the virtual space and then releasing the target object, the target object is crumpled in the virtual space according to the motion of crumpling and falls onto a floor in the virtual space according to the motion of releasing.
Yamamoto does teach in [0034]: For example, in response to the note left by the first user A, suggesting the addition of a park, the second user B may also select the virtual surface 600E, including park information, as shown in FIG. 7D. The second user B may use the information stored on the virtual surface 600E to further annotate the virtual surface 600C, as shown in FIG. 7E. The further annotated virtual surface 600C may then be available for review by the first user A, either substantially in real time in the shared virtual space 550, or at a later time by accessing the virtual surface 600C from storage as shown in FIG. 7F. This constitutes the folding of an annotated virtual object into storage in a designated area of the virtual space by a user gesture, and its unfolding for further access. Additionally, Yamamoto paragraph [0041] teaches, for example, that the user may implement a rotating or twisting gesture, as shown in FIG. 7K, to move the selected virtual surface and clear space.
However, in the analogous art of virtual content display and manipulation, Lu teaches that a dynamic virtual object was controlled to move in the direction of gravity with the acceleration of gravity to achieve a dynamic display (Lu Page 33, first paragraph and Page 46, last paragraph). When a terminal device deleted the contents of a document, the dynamic virtual objects showed fragments moving in the direction of gravity and falling (Lu Page 47, first paragraph). A virtual trash can 305 can be controlled to display in an open display state, and the virtual paper content 310 can be controlled to enter the display state of the virtual trash can 305 for display (Lu Page 58, second paragraph). It would have been obvious before the effective filing date of the invention to have modified the clearing of items from the virtual space in Yamamoto to show similar dynamic and static virtual objects, to demonstrate to the user that the virtual surface was being cleared or moved to storage. One having ordinary skill in the art would have been motivated to have enhanced the realism of the dynamic virtual object and to have shown the user the process occurring (Lu Page 46, last paragraph and Page 58, second paragraph).
Regarding claim 5, Yamamoto does not teach the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of unfolding a crumpled target object in the virtual space, the target object is unfolded in the virtual space according to the motion of unfolding, and handwriting written on the target object is displayed.
Yamamoto does teach in [0034]: For example, in response to the note left by the first user A, suggesting the addition of a park, the second user B may also select the virtual surface 600E, including park information, as shown in FIG. 7D. The second user B may use the information stored on the virtual surface 600E to further annotate the virtual surface 600C, as shown in FIG. 7E. The further annotated virtual surface 600C may then be available for review by the first user A, either substantially in real time in the shared virtual space 550, or at a later time by accessing the virtual surface 600C from storage as shown in FIG. 7F. This constitutes the folding of an annotated virtual object into storage in a designated area of the virtual space by a user gesture, and its unfolding for further access. Additionally, Yamamoto paragraph [0039] teaches that, as shown in FIG. 7H, the user implements a counterclockwise gesture to draw the selected virtual object 700A closer.
However, in the analogous art of virtual content display and manipulation, Lu teaches that a dynamic virtual object was controlled to move in the direction of gravity with the acceleration of gravity to achieve a dynamic display (Lu Page 33, first paragraph and Page 46, last paragraph). When a terminal device deleted the contents of a document, the dynamic virtual objects showed fragments moving in the direction of gravity and falling (Lu Page 47, first paragraph). A virtual trash can 305 can be controlled to display in an open display state, and the virtual paper content 310 can be controlled to enter the display state of the virtual trash can 305 for display (Lu Page 58, second paragraph). It would have been obvious before the effective filing date of the invention to have modified the clearing of items from the virtual space in Yamamoto to show similar dynamic and static virtual objects, to demonstrate to the user that the virtual surface was being cleared or moved to storage. One having ordinary skill in the art would have been motivated to have enhanced the realism of the dynamic virtual object and to have shown the user the process occurring (Lu Page 46, last paragraph and Page 58, second paragraph).
Claims 9-13, 17, and 19 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. Patent Publication 2018/0158250 A1 by Yamamoto in view of U.S. Patent Publication 2020/0333891 A1 by Poore, and further in view of U.S. Patent Publication 2019/0362562 A1 by Benson.
Regarding claim 9, Yamamoto in view of Poore does not teach the electronic system of claim 1, wherein, when the user performs a pinch gesture with one of both tracked hands and then performs a pinch gesture with the other hand, a non-directional link that connects the hands is generated in the virtual space.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A pinch gesture of one hand followed by a pinch gesture of a second hand led to a non-directional link between the two pinched areas (Benson Fig. 35). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Regarding claim 10, Yamamoto in view of Poore does not teach the electronic system of claim 1, wherein, when the user performs a pinch gesture with one of both tracked hands and then performs a pinch gesture while moving the other hand in one direction, a directional link that connects the hands in the virtual space and has an arrow displayed in a portion corresponding to the other hand is generated.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A pinch gesture of one hand followed by a pinch gesture of a second hand led to a non-directional link between the two pinched areas (Benson Fig. 35). A non-directional link could then be thrown, where the projectile was indicated with a projection interface item showing a trajectory based on the velocity and direction of the interface object (Benson Fig. 36; [0384]). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Regarding claim 11, Yamamoto in view of Poore does not teach the electronic system of claim 9, wherein, when the user releases the pinch gesture within a predetermined distance, for two target objects arranged in the virtual space, with both tracked hands in a state in which the non-directional link or the directional link is generated, the non-directional link or the directional link connects the two target objects in the virtual space.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A pinch gesture of one hand followed by a pinch gesture of a second hand led to a non-directional link between the two pinched areas, which were highlighted with concentric circles as target objects (Benson Fig. 35). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Regarding claim 12, Yamamoto in view of Poore does not teach the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding and pulling a link that connects two target objects arranged in the virtual space by a predetermined distance or greater, the link is deleted from the virtual space.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A pinch gesture of one hand followed by a pinch gesture of a second hand led to a non-directional link between the two pinched areas (Benson Fig. 35). A non-directional link could then be grasped and thrown, where the projectile was indicated with a projection interface item showing a trajectory based on the velocity and direction of the interface object, deleting it from the near virtual space or the visible virtual environment (Benson Fig. 36; [0384]). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Regarding claim 13, Yamamoto in view of Poore does not teach the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding a tag object in the virtual space, moving the tag object to be within a predetermined distance to a link that connects two target objects, and then releasing the tag object, the tag object is arranged at a predetermined angle in the link according to the motion of releasing.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A pinch gesture of one hand followed by a pinch gesture of a second hand led to a non-directional link between the two pinched areas (Benson Fig. 35). A non-directional link could then be grasped and thrown, where the projectile was indicated with a projection interface item showing a trajectory based on the velocity and direction of the interface object, tagging this motion at the angle of the trajectory (Benson Fig. 36; [0384]). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Regarding claim 17, Yamamoto in view of Poore does not teach the electronic system of claim 1, wherein, when the user performs a motion of touching a target object in a plane arranged in the virtual space with one of both tracked hands and moving another object with the other hand while the one hand is touching the target object such that the other object is aligned with the target object on the plane, the other object is aligned with the target object on the plane in the virtual space.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A virtualized rendering of the user’s left hand was presented with a bubble interface element that, when selected and touched by the user’s right hand, brought up a paint brush control panel moveable by the other hand in alignment with the virtualized rendering of the left hand (Benson Figs. 45-46, [0383]). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Regarding claim 19, Yamamoto in view of Poore does not teach the electronic system of claim 1, wherein, when the user performs, with the tracked hand, a motion of holding a first plane in the virtual space and bringing the first plane to be within a predetermined distance to a second plane to which one or more objects are attached, an object in an area in the second plane corresponding to the first plane is projected onto the first plane in the virtual space, and when the user performs a motion of touching the object projected on the first plane, the object corresponding to the motion of touching is duplicated on the first plane in the virtual space.
In the analogous art of 3D sensory space manipulation, Benson teaches a method of manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space. The method includes capturing an image of the hands in a three-dimensional (3D) sensory space and sensing a location of the hands, incorporating at least part of the image of the hands into a virtual reality scene, and outlining a modeled position of the location of the hands and incorporating the outline into the virtual reality scene (Benson [0376]). A virtualized rendering of the user’s left hand was presented with a bubble interface element selected by the user’s right hand that, when touched, created a second bubble moveable in a second plane by the right hand, with the bubble duplicated in a first plane corresponding to the left hand (Benson Figs. 49-50, [0383]). It would have been obvious before the effective filing date of the invention to have incorporated the gesture of Benson into Yamamoto’s system for manipulating virtual space. One having ordinary skill in the art would have been motivated to enable manipulating virtual objects using real motions of one or more hands in a three-dimensional (3D) sensory space (Benson Figs. 35-36 and [0382]).
Conclusion
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
U.S. Patent Publication 2020/0387287 A1 by Ravasz et al. teaches outputting artificial reality content; detecting movement of a stylus; detecting a stylus selection action; after detecting the stylus selection action, detecting further movement of the stylus; generating stylus movement content in response to detecting movement of the stylus; generating a UI input element in response to detecting the stylus selection action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to MAHEEN I JAVED whose telephone number is (571)272-0825. The examiner can normally be reached on Mon-Fri 9:00 am-5:00 pm ET.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, AMR AWAD can be reached on 571-272-7764. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/MAHEEN I JAVED/Examiner, Art Unit 2621
/AMR A AWAD/Supervisory Patent Examiner, Art Unit 2621