DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Continued Examination Under 37 CFR 1.114
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on January 23, 2026 has been entered.
Response to Arguments
Applicant’s arguments with respect to claims 1-20 have been considered but are moot in view of the new grounds of rejection set forth below.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
Claims 1-3 and 5-20 are rejected under 35 U.S.C. 103 as being unpatentable over U.S. PG Pub. No. 2015/0324636 A1 (hereinafter “Bentley”) in view of U.S. Patent No. 9,219,901 B2 (hereinafter “Mulholland”), further in view of U.S. Patent No. 8,934,015 B1 (hereinafter “Chi”), and further in view of U.S. PG Pub. No. 2019/0088113 A1 (hereinafter “Tanabe”).
Regarding claim 1, Bentley teaches a method (Bentley, Fig. 12) comprising:
receiving motion sensor data from motion sensors coupled to a head-wearable apparatus (Bentley, ¶0026, 0049-0051; “Embodiments of the motion capture sensors…that are related to safety or health monitoring may be coupled with a cap, helmet, and/or mouthpiece or in any other type of enclosure.”);
detecting a trigger event corresponding to the head-wearable apparatus, based on the motion sensor data indicating a decrease from a first speed to a second speed within a predefined amount of time, the decrease exceeding a predefined threshold; in response to detecting the trigger event, capturing images using a camera coupled to the head-wearable apparatus; and transmitting, via a network, the captured images to a client device (Bentley, ¶0106, 0297-0298; “One or more embodiments may generate a set of highlight frames or fail frames from a video, where the highlight frames or fail frames include an activity of interest. The activity of interest may be identified with one or more activity signatures; for example, sensor data may be used to determine when certain motion metrics fall within or outside particular ranges that determine activities of interest…An epic fail representing a crash may be determined by scanning velocity data looking for a sharp transition between a high velocity and a zero or very low velocity…Highlight or fail frames may be transmitted to one or more of a repository, a viewer, a server, a computer, a social media site, a mobile device, a network, and an emergency service.”).
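For illustration only, the claimed trigger condition — a decrease from a first speed to a second speed within a predefined amount of time, the decrease exceeding a predefined threshold (cf. Bentley’s scan of velocity data for a sharp transition from a high velocity to a near-zero velocity, ¶0297) — may be sketched as follows. The function and parameter names are hypothetical and form no part of the record:

```python
def detect_trigger(samples, speed_drop_threshold, time_window):
    """Scan time-stamped speed samples for a sharp deceleration.

    samples: list of (timestamp_seconds, speed) tuples, oldest first.
    Returns True when the speed falls by more than speed_drop_threshold
    within time_window seconds (a crash-like transition from a high
    speed to a very low speed), and False otherwise.
    """
    for i, (t1, first_speed) in enumerate(samples):
        for t2, second_speed in samples[i + 1:]:
            if t2 - t1 > time_window:
                break  # pair falls outside the predefined amount of time
            if first_speed - second_speed > speed_drop_threshold:
                return True  # trigger event: begin capturing images
    return False
```

In this sketch, a speed trace falling from 9.0 to 0.3 within 0.4 seconds would satisfy a 5.0-unit threshold over a 0.5-second window, whereas a gradual slowdown would not.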
While Bentley further teaches that motion capture data may be used to augment a virtual reality display of a device worn by a user (Bentley, ¶0051-0054), Bentley does not expressly teach the limitations as further claimed, but, in an analogous field of endeavor, Mulholland does as follows.
Mulholland teaches modifying, by at least one processor, a video stream captured by a camera coupled to a head-wearable apparatus to display an augmented reality content item on the head-wearable apparatus (Mulholland, col.4, l.50 – col.5, l.6, col.13, l.60 – col.14, l.21; “Portions of the display of the HMD 120 can be positioned in front of a user's eyes like lenses of a pair of eyeglasses. At least a portion of the display can be transparent, enabling the user 110 to see both the user's surroundings and visual elements shown with the display” wherein “although embodiments are described herein with respect to a see-through HMD, techniques disclosed herein may be extended to other types of HMD, which may utilize cameras to capture images of the user's environment, which are then displayed to the user.”); and
receiving motion sensor data from motion sensors coupled to the head-wearable apparatus while the augmented reality content item is being displayed (Mulholland, col.5, l.7 – col.7, l.6, Fig. 3; “FIG. 3 is an illustration 300 that shows an example of how the HMD 120 may manipulate the displayed visual elements to respond to a movement and/or a first type of triggering event… detected acceleration may trigger manipulation of visual elements, such that playback of a movie is uninterrupted while moving at an approximately constant speed, but detection of an acceleration (change in speed) above a certain threshold may cause the HMD 120 to manipulate playback as described above.”).
Mulholland is considered analogous art because it pertains to a head-wearable apparatus with motion detection capability. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by Bentley to include displaying augmented reality content on a head-wearable display of the user and detecting motion of the user wearing the head-mounted display while the AR content is being displayed, as taught by Mulholland, in order to allow for appropriate alteration of the displayed content in the case of a detected triggering event (Mulholland, col.13, l.60 – col.14, l.21).
The combination of Bentley in view of Mulholland does not expressly teach the limitations as further claimed, but, in an analogous field of endeavor, Chi does as follows.
Chi teaches wherein the camera begins capture in response to detecting the trigger event (Chi, col.3, l.25-60, col.5, l.50-60; “At the onset of…an emergency situation, a user of a wearable computing device can initiate an experience sharing session…when an emergency situation is indicated” wherein “in an experience-sharing session (ESS), a user may share a point-of-view video feed captured by a video camera on a head-mounted display of their wearable computer”, and wherein the shared video feed may be transmitted in real-time upon indication of an emergency situation).
Chi is considered analogous art because it pertains to a wearable computing device for detecting and sharing emergency situations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by the combination of Bentley and Mulholland to include recording and sharing real-time video in response to detection of an emergency situation, as taught by Chi, in order to enable more informative experience sharing from the point of view of the user wearing the device (Chi, col.3, l.60-67).
The combination of Bentley in view of Mulholland, and further in view of Chi, does not expressly teach the limitations as further claimed, but, in an analogous field of endeavor, Tanabe does as follows.
Tanabe teaches the predefined threshold being determined for each motion sensor of the motion sensors coupled to the head-wearable apparatus (Tanabe, ¶0153; “If the motion sensor 120 is formed as a sensor unit consisting of two sensors of an accelerometer and a gyro sensor, the control program 9A can use threshold values individually for acceleration and angular velocity as the predetermined threshold values”).
Tanabe is considered analogous art because it pertains to detecting sudden movement changes of a user based on motion sensor data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by the combination of Bentley, Mulholland and Chi to include setting predefined thresholds for each motion sensor of a device’s motion sensor unit, as taught by Tanabe, in order to more accurately detect when a user is exhibiting sudden acceleration or other change in movement that requires notification to the user (Tanabe, ¶0153-0155).
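For illustration only, Tanabe’s use of individual threshold values for each sensor of the motion sensor unit (¶0153) may be sketched as follows. The names and numeric values are hypothetical and form no part of the record:

```python
# Per-sensor thresholds, as in Tanabe ¶0153: the accelerometer and the
# gyro sensor each receive their own predetermined threshold value.
THRESHOLDS = {
    "accelerometer": 12.0,  # m/s^2 (illustrative value)
    "gyroscope": 4.0,       # rad/s (illustrative value)
}

def sensors_exceeding(readings):
    """readings: dict mapping a sensor name to its current magnitude.

    Returns the names of sensors whose reading exceeds that sensor's
    own predefined threshold, rather than a single shared threshold.
    """
    return [name for name, value in readings.items()
            if value > THRESHOLDS.get(name, float("inf"))]
```

Under this sketch, an accelerometer reading of 15.0 trips its 12.0 threshold even though a simultaneous gyroscope reading of 1.0 remains below its own 4.0 threshold.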
Regarding claim 2, claim 1 is incorporated, and Bentley in the combination further teaches wherein detecting the trigger event comprises detecting the first speed using inertial measurement unit (IMU) sensors of the motion sensors; and detecting the second speed using the IMU sensors (Bentley, ¶0049; “Another embodiment of the invention may utilize inertial measurement units (IMU) or any other sensors that can produce any combination of orientation, position, velocity and/or acceleration information to the mobile device.”).
Regarding claim 3, claim 1 is incorporated, and Bentley in the combination further teaches validating the trigger event based on image data from the video stream (Bentley, ¶0096; “One or more embodiments may combine a video signature and a sensor data signature to filter out false positives; for example, if an activity matches a sensor data signature but does not match the corresponding video signature, the activity can be classified as a false positive. True events may be determined when both the video signature and the sensor data signature are present.”).
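For illustration only, Bentley’s combination of a video signature with a sensor data signature to filter out false positives (¶0096) may be sketched as follows, with hypothetical names that form no part of the record:

```python
def classify_event(sensor_signature_match, video_signature_match):
    """Combine signatures per Bentley ¶0096: a true event requires
    both the sensor data signature and the corresponding video
    signature; a sensor match without the video match is classified
    as a false positive."""
    if sensor_signature_match and video_signature_match:
        return "true event"
    if sensor_signature_match:
        return "false positive"
    return "no event"
```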
Regarding claim 5, claim 1 is incorporated, and Bentley in the combination further teaches wherein the camera is integrated in a wearable device communicatively coupled to the head-wearable apparatus (Bentley, ¶0004, 0297; “In one or more embodiments, the video camera 4101 may be attached to the user, and the camera may include the sensor 4102.”).
Regarding claim 6, claim 1 is incorporated, and Chi in the combination further teaches wherein the capturing images further comprises: capturing a second set of images from a second camera facing the user of the head-wearable apparatus (Chi, col.3, l.60-67; “a share could include a first video feed from a forward-facing camera on a head-mounted display (HMD), and a second video feed from a camera on the HMD that is facing inward towards the wearer's face.”).
As established above, Chi is considered analogous art because it pertains to a wearable computing device for detecting and sharing emergency situations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by the combination of Bentley, Mulholland, Chi and Tanabe to include capturing a set of images from a camera facing the user of the head-wearable device, as taught by Chi, in order to enable more informative experience sharing from the point of view of the user wearing the device (Chi, col.3, l.60-67).
Regarding claim 7, claim 1 is incorporated, and Chi in the combination further teaches in response to detecting the trigger event, causing a microphone coupled to the head-wearable apparatus to record audio (Chi, col.3, l.25-35; “in an experience-sharing session (ESS), a user may share a point-of-view video feed captured by a video camera on a head-mounted display of their wearable computer, along with a real-time audio feed from a microphone of their wearable computer.”).
As established above, Chi is considered analogous art because it pertains to a wearable computing device for detecting and sharing emergency situations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by the combination of Bentley, Mulholland, Chi and Tanabe to include recording audio in response to detection of an emergency situation, as taught by Chi, in order to enable more informative experience sharing from the point of view of the user wearing the device (Chi, col.3, l.60-67).
Regarding claim 8, claim 1 is incorporated, and Bentley in the combination further teaches transmitting an alert to a plurality of client devices, the alert comprising the captured images and location data associated with the head-wearable apparatus (Bentley, ¶0106, 0297-0298; “Highlight frames 4130 with overlays 4135 are then distributed over network 4140 to any set of consumers of the highlight frames. In one or more embodiments that generate highlight frames, consumers of highlight frames may include for example, without limitation: any video or image viewing device; repositories for video, images, or data; a computer of any type, such as a server, desktop, laptop, or tablet; any mobile device such as a phone; a social media site; any network; and an emergency service…When a crash is detected, information about the location and severity of the crash may be sent directly to an emergency service, along with video showing the crash.”).
Regarding claim 9, claim 1 is incorporated, and Bentley in the combination further teaches the method further comprising: transmitting location information and status information associated with the head-wearable apparatus (Bentley, ¶0297-0298; “When a crash is detected, information about the location and severity of the crash may be sent directly to an emergency service, along with video showing the crash.”); and
Chi in the combination further teaches the location information and the status information being displayed in a map interface (Chi, col.18, l.5 – col.20, l.5; “In some of these additional embodiments, such as discussed above in the context of at least FIGS. 5A and 5B, displaying the location information about the requested party can include displaying a map with the location information about the requested party.”).
As established above with respect to claim 1, Chi is considered analogous art because it pertains to a wearable computing device for detecting and sharing emergency situations. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by the combination of Bentley, Mulholland, Chi and Tanabe to include displaying a map with the location and status information of the head-wearable device, as taught by Chi, in order to enable more informative experience and location sharing for an emergency situation (Chi, col.18, l.5 – col.20, l.25).
Claim 10 recites a system having features corresponding to the elements recited in method claim 1, the rejection of which is applicable here, and Bentley in the combination further teaches at least one processor and a memory storing instructions executable by the at least one processor (Bentley, ¶0025).
Claim 11 recites a system having features corresponding to the elements recited in method claim 2, the rejection of which is applicable here.
Claim 12 recites a system having features corresponding to the elements recited in method claim 3, the rejection of which is applicable here.
Claim 13 recites a system having features corresponding to the elements recited in method claim 5, the rejection of which is applicable here.
Claim 14 recites a system having features corresponding to the elements recited in method claim 6, the rejection of which is applicable here.
Claim 15 recites a system having features corresponding to the elements recited in method claim 7, the rejection of which is applicable here.
Claim 16 recites a system having features corresponding to the elements recited in method claim 8, the rejection of which is applicable here.
Claim 17 recites a system having features corresponding to the elements recited in method claim 9, the rejection of which is applicable here.
Claim 18 recites a non-transitory computer-readable storage medium including instructions having features corresponding to the elements recited in method claim 1, the rejection of which is applicable here, and Bentley in the combination further teaches a non-transitory computer-readable storage medium (Bentley, ¶0058).
Claim 19 recites a non-transitory computer-readable storage medium including instructions having features corresponding to the elements recited in method claim 2, the rejection of which is applicable here.
Claim 20 recites a non-transitory computer-readable storage medium including instructions having features corresponding to the elements recited in method claim 5, the rejection of which is applicable here.
Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over the combination of Bentley in view of Mulholland, and further in view of Chi, and further in view of Tanabe, as applied to claim 1 above, and further in view of “In the Blink of an Eye – Combining Head Motion and Eye Blink Frequency for Activity Recognition with Google Glass” (hereinafter “Ishimaru”; published 2014).
Regarding claim 4, claim 1 is incorporated, and the combination of Bentley, Mulholland, Chi and Tanabe does not expressly teach the limitations as further claimed, but, in an analogous field of endeavor, Ishimaru does as follows.
Ishimaru teaches wherein detecting the trigger event further comprises: receiving infrared image data from proximity sensors; and comparing a first motion detected using the infrared image data with second motion detected using the motion sensor data (Ishimaru, p.1-4, Introduction - Conclusion; “We have shown how the infrared proximity sensor from the standard Google Glass can be used to acquire user eye blink statistics and how such statistics can be combined with head motion pattern information for the recognition of complex high level activities.”).
Ishimaru is considered analogous art because it pertains to activity recognition based on sensor data of a wearable device. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the method taught by the combination of Bentley, Mulholland, Chi and Tanabe to include correlating data from a proximity sensor with acquired motion sensor data, as taught by Ishimaru, in order to enable more accurate activity recognition (Ishimaru, Conclusion).
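For illustration only, the claim 4 comparison of a first motion detected using the infrared image data with a second motion detected using the motion sensor data (cf. Ishimaru’s combination of infrared proximity sensor statistics with head motion pattern information) may be sketched as a simple agreement check. The names and the scalar representation are hypothetical and form no part of the record:

```python
def motions_agree(ir_motion, imu_motion, tolerance):
    """Compare a motion estimate derived from infrared proximity data
    against a motion estimate from the motion sensors. Each estimate
    is reduced here to an illustrative scalar magnitude; the two are
    treated as corroborating when they differ by no more than the
    given tolerance."""
    return abs(ir_motion - imu_motion) <= tolerance
```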
Contact Information
Any inquiry concerning this communication or earlier communications from the examiner should be directed to SAMAH A BEG whose telephone number is (571)270-7912. The examiner can normally be reached M-F 9 AM - 5 PM.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, HENOK SHIFERAW can be reached on 571-272-4637. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
/SAMAH A BEG/ Primary Examiner, Art Unit 2676