DETAILED ACTION
I. Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
II. Response to Amendment
The response, filed November 19, 2025, has been entered and made of record. Claims 1-20 are pending in the application. Claims 12-18 have been withdrawn from consideration.
III. Claim Rejections - 35 USC § 103
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
A. Claims 1-3, 6, 7, 10, 19, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Bostick et al. (US 2018/0234622 A1) in view of Yang (Chinese publication CN 210427993 U).
Please refer to the translated copy of Yang attached to this action for the cited paragraph and line numbers, which differ from those of the original Chinese document. Note also that Yang, relied upon in the current rejection, is a different reference from Wang, which was relied upon in the previous rejection.
As to claim 1, Bostick et al. teaches a method (Fig. 5) of using sensor data ([0031]) from a wrist-wearable device (Fig. 2, smart device “204”; [0030], lines 1 and 2) to monitor image-capture trigger conditions for determining when to capture images ([0019]) using an imaging device of a mobile device (Fig. 2, image capture device “202”; [0028], lines 4 and 5), the method comprising:
receiving, from a wrist-wearable device communicatively coupled to the mobile device, sensor data (Fig. 5, block “506”; [0042]), wherein the wrist-wearable device is worn by a user (Fig. 4);
determining, based on the sensor data received from the wrist-wearable device ([0042]; {The claimed sensor data can be read as either the emotion notification itself, received by the image capture device from the smart device “204”, or the biometric data sensed by the biometric sensors in the smart device “204” and received by a processor (e.g., Fig. 1, processor “105”) in that device. Note that the claims do not specifically require that the head-wearable device receive the sensor data.}) and without receiving an instruction from the user to capture an image, whether an image-capture trigger condition for the head-wearable device is satisfied (Fig. 5, block “508”; [0043]); and
in accordance with a determination that the image-capture trigger condition for the head-wearable device is satisfied, instructing an imaging device of the head-wearable device to capture image data (Fig. 5, block “510”; [0044]).
Claim 1 differs from Bostick et al. in that it requires that the image capture/mobile device be a head-wearable device, worn by the user, that includes a frame with at least one lens, although the reference notably discloses, in para. [0019], that the image capture device may be positioned on a selfie stick. However, in the same field of endeavor as the instant application, Yang teaches smart glasses including a frame that rests on a user’s face as optical glasses normally do (Fig. 1, glasses body “100”). The frame includes a conventional extension part that rests on a user’s ear (Fig. 2, temple “120”) and a supporting body (Fig. 2, supporting body “210”) that is rotatably connected to the extension part (p. 3, lines 27-30). A camera lens is secured at a distal end of the supporting body (Fig. 6; p. 3, line 27), and after rotating the supporting body, the user can use the camera to capture a selfie (p. 4, line 33).
In light of the teaching of Yang, the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to adapt Bostick’s image capture device for use in Yang’s smart glasses, where the supporting body can be rotated to take a selfie of the user, thereby allowing for emotion-triggered image capture while the image capture device is angled toward the user. One of ordinary skill in the art would recognize that this configuration would be particularly useful while the user is a spectator at a live sporting or musical event, as it would alleviate the need for the user to handle a selfie stick, which can be unwieldy and tiring.
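For illustration only, the control flow mapped onto Bostick et al. above can be sketched as follows. Every identifier, class, and threshold in this sketch is hypothetical and appears in neither the claims nor the cited references; it merely models sensor-driven capture performed without a user instruction.

```python
# Hypothetical sketch only: no identifier below appears in the claims or the
# cited references. It models the mapped flow: wrist-wearable sensor data is
# received, a trigger condition is evaluated without any capture instruction
# from the user, and the head-wearable camera is instructed to capture.
from dataclasses import dataclass
from typing import Iterable


@dataclass
class SensorSample:
    heart_rate_bpm: float  # biometric reading relayed by the wrist-wearable device


def trigger_satisfied(sample: SensorSample) -> bool:
    # Placeholder condition: an elevated heart rate read as an "excited"
    # emotional state (cf. Bostick et al., [0031], [0043]).
    return sample.heart_rate_bpm > 120.0


class HeadWearableCamera:
    def capture(self) -> None:
        print("capturing image data")  # stands in for actual camera control


def monitor(samples: Iterable[SensorSample], camera: HeadWearableCamera) -> None:
    # Capture is driven solely by the sensor data; no user instruction is received.
    for sample in samples:
        if trigger_satisfied(sample):
            camera.capture()


monitor([SensorSample(80.0), SensorSample(130.0)], HeadWearableCamera())
```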
As to claim 2, Bostick et al., as modified by Yang, teaches the method of claim 1, wherein:
the sensor data received from the wrist-wearable device is from a first type of sensor, and
the head-wearable device does not include the first type of sensor (see Bostick et al., [0031], lines 1-3).
As to claim 3, Bostick et al., as modified by Yang, teaches the method of claim 1, further comprising:
receiving, from the wrist-wearable device that is communicatively coupled to the head-wearable device, additional sensor data;
determining, based on the additional sensor data received from the wrist-wearable device, whether an additional image-capture trigger condition for the head-wearable device is satisfied, the additional image-capture trigger condition being distinct from the image-capture trigger condition; and
in accordance with a determination that the additional image-capture trigger condition for the head-wearable device is satisfied, instructing the imaging device of the head-wearable device to capture additional image data (see Bostick et al., Fig. 6; [0041]; {For example, any change in emotion results in an image-capture trigger, and a distinct trigger would be produced by each of multiple changes in emotional state.}).
As to claim 6, Bostick et al., as modified by Yang, teaches the method of claim 1, wherein the determination that the image-capture trigger condition is satisfied is further based on sensor data from one or more sensors of the head-wearable device (see Bostick et al., Fig. 5, step “508”; [0043]; {The sensor of the image capture device is an image sensor that captures an image on which emotional analysis is performed.}).
As to claim 7, Bostick et al., as modified by Yang, teaches the method of claim 1, wherein the determination that the image-capture trigger condition is satisfied is further based on identifying, using data from one or both of the imaging device of the head-wearable device and an imaging device of the wrist-wearable device, a predefined object within a field of view of the user (see Bostick et al., Fig. 5, step “508”; [0043]; {The predefined object may be a tear, for example. Moreover, the examiner reads the “field of view of the user” as the field of view of the camera, with which the user seeks to capture emotional images.}).
As to claim 10, Bostick et al., as modified by Yang, teaches the method of claim 1, wherein the image-capture trigger condition is determined to be satisfied based on one or more of a target heartrate detected using the sensor data of the wrist-wearable device (see Bostick et al., [0031], lines 3-6; {The examiner reads the claimed target heart rate as the heart rate that would trigger an excited emotion.}), a target distance during an exercise activity being monitored in part with the sensor data, a target velocity during an exercise activity being monitored in part with the sensor data, a target duration, a user-defined location detected using the sensor data, a user-defined elapsed time monitored in part with the sensor data, image recognition performed on image data included in the sensor data, and position of the wrist-wearable device and/or the head-wearable device detected in part using the sensor data.
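For illustration only, the distinct trigger conditions of the kind recited in claims 3 and 10 can be modeled as independent predicates over the wrist-wearable sensor data. All names and threshold values below are hypothetical and are not drawn from the claims or the cited references.

```python
# Hypothetical sketch only: distinct trigger conditions, each modeled as an
# independent predicate over the wrist-wearable sensor data, any one of
# which can drive capture. Names and threshold values are illustrative.
from typing import Callable, Dict, List

SensorData = Dict[str, float]

TRIGGERS: Dict[str, Callable[[SensorData], bool]] = {
    "target_heartrate": lambda d: d.get("heart_rate_bpm", 0.0) >= 150.0,
    "target_distance": lambda d: d.get("distance_km", 0.0) >= 5.0,
    "target_velocity": lambda d: d.get("velocity_kmh", 0.0) >= 12.0,
}


def satisfied_triggers(data: SensorData) -> List[str]:
    # Report every distinct trigger condition the current data satisfies.
    return [name for name, predicate in TRIGGERS.items() if predicate(data)]


print(satisfied_triggers({"heart_rate_bpm": 155.0, "distance_km": 2.0}))
# -> ['target_heartrate']
```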
The combination of Bostick et al. and Yang detailed above in the rejection of claim 1 forms the basis for the rejections of claims 19 and 20 that follow.
As to claim 19, Bostick et al., as modified by Yang, teaches a wrist-wearable device (see Bostick et al., smart device “204”; [0030], lines 1 and 2) configured to use sensor data to monitor image-capture trigger conditions for determining when to capture images (see Bostick et al., [0019]) using a communicatively coupled imaging device (see Bostick et al., Fig. 2, image capture device “202”; [0028], lines 4 and 5), the wrist-wearable device comprising:
a display (see Bostick et al., Fig. 3);
one or more sensors (see Bostick et al., [0031]); and
one or more processors (see Bostick et al., [0021]) configured to:
receive, from the one or more sensors, sensor data (see Bostick et al., Fig. 5, block “506”; [0042]);
determine, based on the sensor data, whether an image-capture trigger condition for a communicatively coupled head-wearable device (see Yang, smart glasses of Figs. 1 and 2) is satisfied (see Bostick et al., Fig. 5, block “508”; [0043]), wherein the head-wearable device includes a frame with at least one lens (see Yang, Fig. 6; p. 3, line 27); and
in accordance with a determination that the image-capture trigger condition for the communicatively coupled head-wearable device is satisfied, instruct an imaging device of the communicatively coupled head-wearable device to capture image data (see Bostick et al., Fig. 5, block “510”; [0044]).
As to claim 20, Bostick et al., as modified by Yang, teaches a non-transitory, computer-readable storage medium including instructions that, when executed by a wrist-wearable device, cause the wrist-wearable device (see Bostick et al., [0021]-[0023]) to:
receive, via one or more sensors communicatively coupled with the wrist-wearable device (see Bostick et al., [0031]), sensor data (see Bostick et al., Fig. 5, block “506”, [0042]);
determine, based on the sensor data, whether an image-capture trigger condition for a communicatively coupled head-wearable device (see Yang, smart glasses of Figs. 1 and 2) is satisfied (see Bostick et al., Fig. 5, block “508”; [0043]); and
in accordance with a determination that the image-capture trigger condition for the head-wearable device is satisfied, instruct an imaging device of the head-wearable device to capture image data (see Bostick et al., Fig. 5, block “510”; [0044]).
B. Claim 9 is rejected under 35 U.S.C. 103 as being unpatentable over Bostick et al. (US 2018/0234622 A1) in view of Yang (Chinese publication CN 210427993 U) and further in view of Park (US 2021/0224522 A1).
As to claim 9, Bostick et al., as modified by Yang, teaches the method of claim 1. The claim differs from Bostick et al., as modified by Yang, in that it requires, in accordance with the determination that the image-capture trigger condition is satisfied, instructing the wrist-wearable device to store information concerning the user's performance of an activity for association with the image data captured using the imaging device of the head-wearable device.
However, in the same field of endeavor as the instant application, Park discloses a wearable device (Fig. 2A; [0050], lines 9-13) that may store a captured image and an emotion associated with the user in the image determined through image analysis ([0064] and [0065]). In light of the teaching of Park, the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to store the captured image and the emotion information at one or both of the smart device and image capture device of Bostick et al., as modified by Yang, as this would allow the user to more easily access images associated with specific emotions.
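For illustration only, storing a captured image together with the emotion associated with it, in the manner Park is cited for, can be sketched as follows. All identifiers below are invented for this sketch and appear in none of the cited references.

```python
# Hypothetical sketch only: a captured image is stored together with the
# emotion associated with it so that images can later be retrieved by
# emotion. All identifiers are invented for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class StoredCapture:
    image_bytes: bytes
    emotion: str  # e.g., "excited", as determined through image analysis


@dataclass
class CaptureStore:
    records: List[StoredCapture] = field(default_factory=list)

    def save(self, image_bytes: bytes, emotion: str) -> None:
        self.records.append(StoredCapture(image_bytes, emotion))

    def by_emotion(self, emotion: str) -> List[StoredCapture]:
        return [r for r in self.records if r.emotion == emotion]


store = CaptureStore()
store.save(b"...jpeg data...", "excited")
print(len(store.by_emotion("excited")))  # -> 1
```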
C. Claim 11 is rejected under 35 U.S.C. 103 as being unpatentable over Bostick et al. (US 2018/0234622 A1) in view of Yang (Chinese publication CN 210427993 U) and further in view of Cucci et al. (US 9,628,708 B2).
As to claim 11, Bostick et al., as modified by Yang, teaches the method of claim 1. The claim differs from Bostick et al., as modified by Yang, in that it requires that instructing the imaging device of the head-wearable device to capture the image data includes instructing the imaging device of the head-wearable device to capture a plurality of images.
However, in the same field of endeavor as the instant application, Cucci et al. discloses a mobile device (Figs. 1 and 3A-3C) with camera functionality (Fig. 1, image sensor “14A”) that allows for image capture in burst mode, where multiple images are consecutively captured for a single shot (col. 4, lines 51-53). In light of the teaching of Cucci et al., the examiner submits that it would have been obvious to one of ordinary skill in the art before the effective filing date of the instant application to allow the image capture device of Bostick et al., as modified by Yang, to operate in burst mode when an emotion triggers image capture because this would allow the user to select one or more images that he or she specifically likes for long-term storage while deleting those that captured the user in a less desirable manner.
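For illustration only, burst-mode capture of the kind Cucci et al. is cited for can be sketched as follows; a single trigger yields several consecutively captured frames. All names and parameter values below are hypothetical.

```python
# Hypothetical sketch only: one trigger yields several consecutively
# captured frames from which the user may keep a subset. Names and values
# are invented for illustration.
import time
from typing import List


def capture_single_frame() -> bytes:
    return b"frame"  # stands in for a real image-sensor readout


def capture_burst(count: int = 5, interval_s: float = 0.05) -> List[bytes]:
    frames: List[bytes] = []
    for _ in range(count):
        frames.append(capture_single_frame())
        time.sleep(interval_s)  # fixed inter-frame spacing within the burst
    return frames


print(f"{len(capture_burst())} images captured for a single trigger")  # -> 5 images
```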
IV. Allowable Subject Matter
Claims 4, 5, and 8 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims. The following is the examiner’s statement of reasons for the indication of allowable subject matter: As to claim 4, the prior art that addresses communicative coupling between a smart watch and a head-mounted display (HMD) fails to disclose instructing the smart watch to capture an image, in addition to an image captured by the HMD, when an image-capture trigger is sensed by the watch, while forgoing instructing the smart watch to capture such an additional image when an additional image-capture trigger is sensed by the watch. Claims 5 and 8 are allowable because they depend from claim 4.
V. Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANTHONY J DANIELS whose telephone number is (571)272-7362. The examiner can normally be reached M-F 9:00 AM - 5:00 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Sinh Tran, can be reached at 571-272-7564. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/ANTHONY J DANIELS/Primary Examiner, Art Unit 2637
1/4/2026