DETAILED ACTION
This communication is responsive to the Amendment filed 09/04/2025.
Claims 25-35, 37-41 and 43-46 are pending in this application. In the Amendment, claims 25, 29-31, 34-35 and 37 are amended, claims 36 and 42 are cancelled, and claims 45-46 are new. This action is made Final.
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Arguments
Applicant’s arguments with respect to the claims amended 09/04/2025 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument.
Claim Interpretation
The claim limitations “means for projecting” or “projecting means” and “means for sensing movement of a user” or “sensing means” in Claims 37-41 and 43-44 remain interpreted under 35 U.S.C. 112(f) as discussed in the previous Office action.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 25, 30-31 and 45-46 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) in view of Theimer et al. (“Theimer”, US 2014/0225915).
As per claim 25, Mistry teaches at least one memory comprising machine-readable instructions to cause at least one processor circuit (Mistry, para.72-73, 83, Fig.3, computer 13) to:
cause a first wearable device to project the virtual wearable accessory on the portion of the body of the user, the virtual wearable accessory to display first content (Mistry, Fig.11-12, para.5-6, 9, 31, 91-92, projector displays watch image/dial-pad on wrist);
detect a gesture performed by the user based on one or more signals output by at least one sensor associated with the body of the user (Mistry, Fig.12, para.5, 10-12, 73, 93-100, vision engine recognizes gestures captured by camera); and
in response to determining that the gesture corresponds to a first command for the virtual wearable accessory (Mistry, para.9, 31, 73, 77-80, 113, 124, Fig.26, perform action according to gesture), cause the first wearable device to project the virtual wearable accessory to display second content, the second content associated with the first command, the second content different than the first content (Mistry, para.31, 73, 77-80, 92, 113, 128, Fig.28, second content may be retrieved from Internet in response to gestures interacting with app).
However, Mistry does not teach to identify a portion of a body of a user relative to a physical wearable accessory worn by the user as a location on which to project a virtual wearable accessory, the first wearable device being different than the physical wearable accessory. Theimer teaches a wearable display to identify a portion (Theimer, para.34; Fig.2, image receiving area 210) relative to a physical wearable accessory worn by the user (Theimer, para.37-40; Fig.2, receiving area portion 210 relative to reflective markers 212/214/216/218) as a location on which to project a virtual wearable accessory (Theimer, para.23, 34, projected image), the first wearable (projecting) device being different than the physical wearable accessory (Theimer, para.18, 21; Fig.2, projector 204 separated from device 202). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Theimer’s teaching with Mistry’s memory in order to display the projected image with the proper size and angles without distortion (Theimer, para.22-25, 43).
As per claim 30, the memory of Mistry and Theimer teaches the at least one memory of claim 25, wherein the machine-readable instructions are to cause one or more of the at least one processor circuit to:
detect a presence of the physical wearable accessory worn on the portion of the body of the user based on one or more of image data including the user or the one or more signals output by the at least one sensor (Theimer, para.21, 42, sensor on projector); and
identify the location on the portion of the body based on a boundary of the physical wearable accessory (Theimer, para.37-40; Fig.2, receiving area portion 210 relative to reflective markers 212/214/216/218; Mistry, para.91-92, Fig.11-12, watch/dial-pad projected on body).
As per claim 31, Mistry teaches an apparatus comprising:
memory (Mistry, para.20, computer memory);
machine-readable instructions (Mistry, para.19-20, 32-33, computer instructions); and
at least one processor circuit to be programmed by the machine-readable instructions to:
cause a first wearable device to project the virtual wearable accessory on the portion of the body of the user, the virtual wearable accessory to display first content (Mistry, Fig.11-12, para.5-6, 9, 31, 91-92, projector displays watch image/dial-pad on wrist);
detect a gesture performed by the user based on the one or more outputs of the one or more sensors (Mistry, Fig.12, para.5, 10-12, 73, 93-100, vision engine recognizes gestures captured by camera); and
in response to determining that the gesture corresponds to a first command for the virtual wearable accessory (Mistry, para.9, 31, 73, 77-80, 113, 124, Fig.26, perform action according to gesture), cause the first wearable device to project the virtual wearable accessory to display second content, the second content associated with the first command, the second content different than the first content (Mistry, para.31, 73, 77-80, 92, 113, 128, Fig.28, second content may be retrieved from Internet in response to gestures interacting with app).
However, Mistry does not teach to detect a first location of a physical wearable accessory worn on a body of a user based on one or more outputs of one or more sensors associated with the body of the user; determine a second location on a portion of the body of the user relative to the first location on which to project a virtual wearable accessory; and the first wearable device being different than the physical wearable accessory. Theimer teaches a wearable display to detect a first location of a physical wearable accessory worn on a body of a user based on one or more outputs of one or more sensors (Theimer, para.21, 37, 42, sensor on projector, reflective markers 212/214/216/218); determine a second location on a portion relative to the first location (Theimer, para.34, 37; Fig.2, image receiving area portion 210 relative to reflective markers 212/214/216/218) on which to project a virtual wearable accessory (Theimer, para.23, 34, projected image); and the first wearable (projecting) device being different than the physical wearable accessory (Theimer, para.18, 21; Fig.2, projector 204 separated from device 202). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Theimer’s teaching with Mistry’s apparatus in order to display the projected image with the proper size and angles without distortion (Theimer, para.22-25, 43).
As per claim 45, the apparatus of Mistry and Theimer teaches the apparatus of claim 31, wherein the one or more outputs of the one or more sensors corresponds to at least one of image data or depth data (Theimer, para.21, 37, 42, sensor on projector, reflective markers 212/214/216/218; Mistry, Fig.3, camera 11; para.9, 65, 91-92; image projected on body).
As per claim 46, the apparatus of Mistry and Theimer teaches the apparatus of claim 31, wherein a portion of the virtual wearable accessory is adjacent an edge of the physical wearable accessory (Theimer, para.37-40; Fig.2, receiving area portion 210 adjacent to reflective markers 212/214/216/218; Mistry, para.91-92, Fig.11-12, watch/dial-pad projected on body).
Claims 26 and 32 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Theimer et al. (“Theimer”, US 2014/0225915) in view of Perez et al. (“Perez”, US 9,122,321) and further in view of Maciocci et al. (“Maciocci”, US 2012/0249741).
As per claim 26, the memory of Mistry and Theimer teaches the at least one memory of claim 25, but does not teach wherein the machine-readable instructions are to cause one or more of the at least one processor circuit to: determine an orientation of a face of the user relative to the virtual wearable accessory. Perez teaches a wearable device medium that determines an orientation of a face of the user relative to the virtual wearable accessory (Perez, col.1, lines 29-35; col.10, lines 28-31; col.18, lines 7-25; col.18, line 66-col.19, line 5; col.24, lines 8-12; orientation and gaze of user’s field of view; col.5, lines 32-38, gesture to perform actions). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Perez’s teaching with the memory of Mistry and Theimer in order to determine which object the user is focusing on.
Furthermore, the memory of Mistry, Theimer and Perez does not teach to verify that the gesture corresponds to the first command for the virtual wearable accessory based on the orientation of the face of the user. Maciocci teaches a virtual input medium wherein a gesture corresponds to a command for the virtual wearable accessory based on the orientation of the face of the user (Maciocci, para.69, input by focused gaze). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Maciocci’s teaching with the memory of Mistry, Theimer and Perez in order to perform commands with little input.
Claim 32 is similar in scope to claim 26, and is therefore rejected under similar rationale.
Claims 27 and 33 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Theimer et al. (“Theimer”, US 2014/0225915) in view of Maciocci et al. (“Maciocci”, US 2012/0249741).
As per claim 27, the memory of Mistry and Theimer teaches the at least one memory of claim 25, but does not teach wherein (a) in response to the gesture occurring in a first orientation relative to the virtual wearable accessory, the gesture is to correspond to a first interaction with the second content and (b) in response to the gesture occurring in a second orientation relative to the virtual wearable accessory, the gesture is to correspond to a second interaction with the second content, the second orientation different than the first orientation. Maciocci teaches a virtual input medium wherein, in response to the gesture occurring in a first orientation relative to the virtual wearable accessory, the gesture is to correspond to a first interaction with the second content and, in response to the gesture occurring in a second orientation relative to the virtual wearable accessory, the gesture is to correspond to a second interaction with the second content, the second orientation different than the first orientation (Maciocci, para.69, gesture orientation determines interaction with objects). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Maciocci’s teaching with the memory of Mistry and Theimer in order to easily communicate multiple commands to the objects using a single means.
Claim 33 is similar in scope to claim 27, and is therefore rejected under similar rationale.
Claims 28 and 34 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Theimer et al. (“Theimer”, US 2014/0225915) in view of Bar-Zeev et al. (“Bar-Zeev”, US 2012/0127284).
As per claim 28, the memory of Mistry and Theimer teaches the at least one memory of claim 25, wherein the second content includes a virtual object (Mistry, para.7, 73, 91-92), but does not teach to determine a velocity of the gesture based on one or more of image data including the user or the one or more signals output by the at least one sensor; and cause the display of the virtual object to move relative to the virtual wearable accessory based on the velocity of the gesture. Bar-Zeev teaches a virtual input medium that includes determining a velocity of a gesture based on image data including the user or signals output by the at least one sensor (Bar-Zeev, para.59, sensors 132) and causing the display of a virtual object to move relative to the virtual wearable accessory based on the velocity of the gesture (Bar-Zeev, para.97-100, 155-156, gesture velocity affects virtual object movement). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Bar-Zeev’s teaching with the memory of Mistry and Theimer in order to distinguish gestures based on movement.
Claim 34 is similar in scope to claim 28, and is therefore rejected under similar rationale.
Claims 29 and 35 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Theimer et al. (“Theimer”, US 2014/0225915) in view of Border et al. (“Border”, US 9,299,194) and further in view of Perez et al. (“Perez”, US 9,122,321).
As per claim 29, the memory of Mistry and Theimer teaches the at least one memory of claim 25, but does not teach wherein the first command is an authentication command to enable access to the second content and the machine-readable instructions are to cause one or more of the at least one processor circuit to: identify a second wearable device as being an authenticated device and permit access by the second wearable device to the virtual wearable accessory projected by the first wearable device with the display of the second content.
Border teaches a virtual input medium wherein a command is an authentication command to enable access to second content (Border, col.14, line 53-col.15, line 20, access to specific content based on eyes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Border’s teaching with the memory of Mistry and Theimer in order to provide secure sharing of content.
Furthermore, the memory of Mistry, Theimer and Border does not teach to identify a second wearable device as being an authenticated device; and permit access by the second wearable device to the virtual wearable accessory projected by the first wearable device with the display of the second content. Perez teaches a medium for collaboration of users to identify a second wearable device as being an authenticated device (Perez, Fig.9, user 29B wants to access virtual object of user 29A; col.21, line 62-col.22, line 6, second user with own display can view virtual object of first user; col.22, lines 43-55, col.26, lines 40-58, access permissions set for each user); and permit access by the second wearable device to the virtual wearable accessory projected by the first wearable device with the display of the second content (Perez, col.21, lines 12-43; col.24, lines 31-41, 55-63). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Perez’s teaching with the memory of Mistry, Theimer and Border in order to collaborate with multiple users in manipulating virtual objects.
Claim 35 is similar in scope to claim 29, and is therefore rejected under similar rationale.
Claims 37-40 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) in view of Bar-Zeev et al. (“Bar-Zeev”, US 2012/0127284).
As per claim 37, Mistry teaches an apparatus comprising:
first wearable means for projecting (Mistry, Fig.11-12, para.5-6, 9, 31, 91-92, projector displays watch image/dial-pad on wrist);
means for sensing movement of a user (Mistry, Fig.12, para.5, 10-12, 73, 93-100, vision engine recognizes gestures captured by camera); and
at least one processor circuit to:
cause the first wearable projecting means to project a virtual wearable accessory on a portion of a body of the user, the virtual wearable accessory to display first content (Mistry, Fig.11-12, para.5-6, 9, 31, 91-92, projector displays watch image/dial-pad on wrist);
detect a gesture performed by the user based on one or more signals output by the sensing means (Mistry, Fig.12, para.5, 10-12, 73, 93-100, vision engine recognizes gestures captured by camera);
in response to determining that the gesture corresponds to a first command for the virtual wearable accessory (Mistry, para.9, 31, 73, 77-80, 113, 124, Fig.26, perform action according to gesture), cause the first wearable projecting means to project the virtual wearable accessory to display second content, the second content associated with the first command, the second content different than the first content (Mistry, para.31, 73, 77-80, 92, 113, 128, Fig.28, second content may be retrieved from Internet in response to gestures interacting with app), the second content including a virtual object (Mistry, para.7, 73, 91-92).
However, Mistry does not teach to determine a velocity of the gesture as being a first velocity or a second velocity, the first velocity different than the second velocity; and cause the virtual object to move relative to the virtual wearable accessory at (a) a third velocity based on the first velocity when the velocity of the gesture corresponds to the first velocity or (b) a fourth velocity based on the second velocity when the velocity of the gesture corresponds to the second velocity, the fourth velocity and the third velocity being different non-zero velocities. Bar-Zeev teaches a virtual input apparatus that includes determining a velocity of a gesture (Bar-Zeev, para.59, sensors 132 determine velocity) and causing the virtual object to move relative to the virtual wearable accessory based on the velocity of the gesture (Bar-Zeev, para.97-100, 155-156, gesture velocity affects virtual object movement). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Bar-Zeev’s teaching with Mistry’s apparatus in order to distinguish gestures based on movement.
As per claim 38, the apparatus of Mistry and Bar-Zeev teaches the apparatus of claim 37, wherein the first wearable projecting means is carried by a head-mounted device to be worn by the user (Bar-Zeev, para.58; Fig.1, head-mounted device 2).
As per claim 39, the apparatus of Mistry and Bar-Zeev teaches the apparatus of claim 37, wherein the sensing means includes an accelerometer (Bar-Zeev, para.59, Fig.3, accelerometer 132C).
As per claim 40, the apparatus of Mistry and Bar-Zeev teaches the apparatus of claim 37, further including a camera to output image data including the portion of the body of the user, wherein one or more of the at least one processor circuit is to cause the first wearable projecting means to project the virtual wearable accessory on the portion of the body of the user based on the image data (Mistry, Fig.3, camera 11; para.9, 65, 91-92; image projected on body).
Claim 41 is rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Bar-Zeev et al. (“Bar-Zeev”, US 2012/0127284) in view of Theimer et al. (“Theimer”, US 2014/0225915).
As per claim 41, the apparatus of Mistry and Bar-Zeev teaches the apparatus of claim 40, wherein one or more of the at least one processor circuit is to: cause the first wearable device to project the virtual wearable accessory on a location of the portion of the body of the user (Mistry, para.91-92, Fig.11-12, watch/dial-pad projected on body). However, the apparatus of Mistry and Bar-Zeev does not teach to detect a presence of the physical wearable accessory worn on the portion of the body of the user based on one or more of image data; and cause the first wearable device to project the virtual wearable accessory on a location of the portion of the body of the user proximate to the physical wearable accessory.
Theimer teaches a wearable display to detect a presence of the physical wearable accessory (Theimer, para.37-40; Fig.2, reflective markers 212/214/216/218) worn on the portion of the body of the user based on one or more of image data (Theimer, para.21, 42, sensor on projector) and cause the first wearable device to project the virtual wearable accessory (Theimer, para.23, 34, projected image) on a location (Theimer, para.34; Fig.2, image receiving area 210) proximate to the physical wearable accessory (Theimer, para.37-40; Fig.2, receiving area portion 210 proximate to reflective markers 212/214/216/218). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Theimer’s teaching with the apparatus of Mistry and Bar-Zeev in order to display the projected image with the proper size and angles without distortion (Theimer, para.22-25, 43).
Claim 43 is rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Bar-Zeev et al. (“Bar-Zeev”, US 2012/0127284) in view of Perez et al. (“Perez”, US 9,122,321) and further in view of Maciocci et al. (“Maciocci”, US 2012/0249741).
As per claim 43, the apparatus of Mistry and Bar-Zeev teaches the apparatus of claim 37, but does not teach wherein one or more of the at least one processor circuit is to: determine an orientation of a face of the user relative to the virtual wearable accessory. Perez teaches a wearable device that determines an orientation of a face of the user relative to the virtual wearable accessory (Perez, col.1, lines 29-35; col.10, lines 28-31; col.18, lines 7-25; col.18, line 66-col.19, line 5; col.24, lines 8-12; orientation and gaze of user’s field of view; col.5, lines 32-38, gesture to perform actions). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Perez’s teaching with the apparatus of Mistry and Bar-Zeev in order to determine which object the user is focusing on.
Furthermore, the apparatus of Mistry, Bar-Zeev and Perez does not teach to verify that the gesture corresponds to the first command for the virtual wearable accessory based on the orientation of the face of the user. Maciocci teaches a virtual input device wherein a gesture corresponds to a command for the virtual wearable accessory based on the orientation of the face of the user (Maciocci, para.69, input by focused gaze). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Maciocci’s teaching with the apparatus of Mistry, Bar-Zeev and Perez in order to perform commands with little input.
Claim 44 is rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al. (“Mistry”, US 2010/0199232) and Bar-Zeev et al. (“Bar-Zeev”, US 2012/0127284) in view of Border et al. (“Border”, US 9,299,194).
As per claim 44, the apparatus of Mistry and Bar-Zeev teaches the apparatus of claim 37, but does not teach wherein the first command is an authentication command to enable access to the second content.
Border teaches a virtual input device wherein a command is an authentication command to enable access to second content (Border, col.14, line 53-col.15, line 20, access to specific content based on eyes). It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to include Border’s teaching with the apparatus of Mistry and Bar-Zeev in order to provide secure sharing of content.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
The prior art made of record and not relied upon is considered pertinent to applicant's disclosure.
Toney et al. (US 2012/0249409) teaches a method of displaying interfaces on a user’s body from a wearable device.
Inquiries
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAJEDA MUHEBBULLAH whose telephone number is (571)272-4065. The examiner can normally be reached Mon-Tue/Thur-Fri 10am-8pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, William L Bashore can be reached at 571-272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/S.M./
Sajeda Muhebbullah
Examiner, Art Unit 2174
/WILLIAM L BASHORE/ Supervisory Patent Examiner, Art Unit 2174