Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
DETAILED ACTION
Response to Amendment
Applicant’s amendment filed on 10/2/2026 has been entered.
Claim Rejections - 35 USC § 103
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
Claims 1-6, 8, and 12-20 are rejected under 35 U.S.C. 103 as being unpatentable over Alameh et al. (US 10896591) in view of Meisenholder et al. (US 20220373796).
Regarding claim 16, Alameh teaches a device comprising: one or more image sensors (119 sensors in fig. 1);
one or more speakers (col. 10, line 2: such as a loudspeaker); a non-transitory memory (118 in fig. 1); and one or more processors (116 in fig. 1) to: receive, from the one or more image sensors, an image of a physical environment having a device field-of-view including a first region within an area of a user field-of-view and a second region outside the area of the user field-of-view (col. 4, lines 25-28: Accordingly, strategic positioning of the proximity sensor components allows these devices to detect objects approaching from behind);
detect, in the image of the physical environment, an object at a location in the physical environment (col. 4, lines 25-28);
determine that the location in the physical environment is in the second region outside the area of the user field-of-view (col. 4, lines 25-28: from behind); and
in response to determining that the location in the physical environment is in the second region outside the area of the user field-of-view, play, via the one or more speakers, an audio notification of the detection (col. 4, lines 35-38: delivering an alert to a user. The alert can notify the user that someone is behind them).
Alameh does not teach an image of a physical environment having a device field-of-view including a first region within an area of a user field-of-view and a second region outside the area of the user field-of-view.
Meisenholder teaches an image of a physical environment having a device field-of-view including a first region within an area of a user field-of-view and a second region outside the area of the user field-of-view (p0028: the entire wide-angle camera view during capture even for those areas that the user cannot see within the small display FOV…user may move through the entire field of view by moving the user's head to view the entire FOV captured by the camera while wearing the electronic eyewear device).
Alameh and Meisenholder are combinable because they both deal with detecting objects using sensors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Alameh with the teachings of Meisenholder for the purpose of providing augmented reality experiences whereby augmented reality objects are provided in the captured real-world image (p0003).
Regarding claim 19, Alameh teaches the device of claim 16, wherein the one or more processors are to determine a motion of the object and play the audio notification based on the motion of the object (col. 64, lines 40-45: serious indicating that the person may be in danger because someone is rapidly approaching from behind).
Regarding claim 13, the structural elements of apparatus claim 19 perform all of the steps of method claim 13. Thus, claim 13 is rejected for the same reasons discussed in the rejection of claim 19.
Claim 20 has been analyzed and rejected with regard to claim 16 and in accordance with Alameh's further teaching on: a computer-readable memory that contains instructions, which when executed by a processor perform steps in a method (col. 4, lines 35-40).
Regarding claim 1, the structural elements of apparatus claim 16 perform all of the steps of method claim 1. Thus, claim 1 is rejected for the same reasons discussed in the rejection of claim 16.
Regarding claim 4, Alameh teaches the method of claim 1, wherein determining that the location in the physical environment is outside the area of the user field-of-view includes determining that the location in the physical environment is outside the user field-of-view (col. 64, lines 40-45: serious indicating that the person may be in danger because someone is rapidly approaching from behind).
Regarding claim 5, Alameh teaches the method of claim 1, wherein determining that the location in the physical environment is outside the area of the user field-of-view includes determining that the location in the physical environment is outside a portion of the user field-of-view (col. 64, lines 40-45: behind).
Regarding claim 6, Alameh teaches the method of claim 1, wherein determining that the location in the physical environment is outside the area of the user field-of-view includes estimating the area of the user field-of-view (col. 64, lines 40-45: rapidly approaching from behind).
Regarding claim 14, Alameh teaches the method of claim 1, wherein playing the audio notification includes generating an audio signal indicative of the detection and playing, via the one or more speakers, the audio signal (col. 10, line 2: such as a loudspeaker).
Regarding claim 17, Alameh teaches the device of claim 16, wherein the one or more processors are further to, in response to determining that the location is within the area of the user field-of-view, forgo playing the audio notification (col. 5, lines 35-40: proximity sensor component detects an object to the rear of a user, a control operation can include delivering an alert to the user. Alert only for proximity sensor).
Regarding claim 2, the structural elements of apparatus claim 17 perform all of the steps of method claim 2. Thus, claim 2 is rejected for the same reasons discussed in the rejection of claim 17.
Regarding claim 3, Alameh teaches the method of claim 1, wherein detecting the object at the location in the physical environment includes transmitting, to a peripheral device, the image of the physical environment, and receiving, from the peripheral device, an indication of the detection (col. 6, lines 5-10: these devices can be actuated to capture audio and/or video when a person is behind the user).
Regarding claim 18, Alameh teaches the device of claim 16, wherein the one or more processors are to determine an object type of the object and play the audio notification based on the object type (col. 15, lines 5-10: object 311 is ten feet away, the audible sound 601 may be a light beep. However, when the object is two feet away, the audible sound 601 may be a loud noise).
Regarding claim 8, Alameh teaches the method of claim 1, wherein playing the audio notification includes playing the audio notification spatially from the location (col. 4, lines 35-38).
Regarding claim 12, the structural elements of apparatus claim 18 perform all of the steps of method claim 12. Thus, claim 12 is rejected for the same reasons discussed in the rejection of claim 18.
Regarding claim 15, Alameh teaches the method of claim 1, wherein playing the audio notification includes altering playback, via the one or more speakers, of an audio stream (col. 15, lines 5-10).
Claims 7 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Alameh in view of Meisenholder as applied to claim 16 above, and further in view of Di Censo et al. (US 20160093207).
Regarding claim 7, Alameh in view of Meisenholder does not teach the method of claim 6, wherein estimating the area of the user field-of-view includes determining an area around a gaze location of the user.
Di teaches wherein estimating the area of the user field-of-view includes determining an area around a gaze location of the user (p0022: The pedestrian-facing camera 164 can detect a direction of eye gaze of the pedestrian).
Alameh in view of Meisenholder and Di Censo are combinable because they both deal with detecting objects using sensors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Alameh with the teachings of Di Censo for the purpose of providing a system that can provide alerts to a distracted pedestrian related to hazards in the pedestrian's path (abstract).
Regarding claim 9, Alameh in view of Meisenholder and Di Censo teaches the method of claim 8, further comprising: receiving, from the one or more image sensors, a second image of the physical environment having a device field-of-view different than the user field-of-view; detecting (p0019: three cameras 208, 210, and 212 that can capture digital images in front of, in back of, and to the side of the pedestrian), in the second image of the physical environment, the object at a second location in the physical environment (p0024: 208 can be in communication with a processor module 202 to detect hazardous objects); and playing, via the one or more speakers, a second audio notification of the detection spatially from the second location (p0024: processor module 202 can output an audible warning through the acoustic transducers 204).
The rationale applied to the rejection of claim 7 has been incorporated herein.
Claims 10 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Alameh in view of Meisenholder as applied to claim 8 above, and further in view of Bevelacqua et al. (US 9547335).
Regarding claim 10, Alameh in view of Meisenholder does not teach the method of claim 8, further comprising: receiving, from the one or more image sensors, a second image of the physical environment having a device field-of-view different than the user field-of-view; detecting, in the second image of the physical environment, the object at a second location in the physical environment within the area of the user field-of-view; and forgoing playing, via the one or more speakers, a second audio notification of the detection.
Bevelacqua teaches the method of claim 8, further comprising: receiving, from the one or more image sensors, a second image of the physical environment having a device field-of-view different than the user field-of-view (col. 2, lines 35-40: Camera 108 may image a field of view similar to what the user may see); detecting, in the second image of the physical environment, the object at a second location in the physical environment within the area of the user field-of-view (col. 2, lines 35-40: interpret objects within the field of view); and forgoing playing, via the one or more speakers, a second audio notification of the detection (col. 2, lines 35-40: could alert the user).
Alameh in view of Meisenholder and Bevelacqua are combinable because they both deal with detecting objects using sensors. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to combine the teachings of Alameh in view of Meisenholder with the teachings of Bevelacqua for the purpose of minimizing the increase to the size and form factor of wearable computing devices.
Regarding claim 11, Alameh in view of Meisenholder teaches the method of claim 1, further comprising: detecting, in the image of the physical environment, a second object at a second location in the physical environment; and playing, via the one or more speakers, a second audio notification of the detection of the second object spatially from the second location (col. 4, lines 35-38).
Response to Arguments
Applicant's arguments with respect to claims have been considered but are moot in view of the new ground(s) of rejection.
Conclusion
Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).
A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any extension fee pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the date of this final action.
Any inquiry concerning this communication or earlier communications from the examiner should be directed to HELEN Q ZONG whose telephone number is (571)270-1600. The examiner can normally be reached on Mon-Fri 9-6.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Merouan, Abderrahim can be reached on (571) 270-5254. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.
Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.
HELEN ZONG
Primary Examiner
Art Unit 2683
/HELEN ZONG/Primary Examiner, Art Unit 2683