DETAILED ACTION
Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.
Response to Amendment
This Office action is responsive to the amendment filed on August 13, 2025. Claims 1, 4, 7, 11 and 20 have been amended. No claims have been canceled or newly added. Claims 1-20 are presented for examination and remain pending in the application.
The Examiner maintains the previous objection to claims 9 and 18 for allowable subject matter. (See the reasons on pages 29-30.)
The previous objection to claim 4 has been withdrawn due to claim amendment.
Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:
A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.
The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.
The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.
Claims 1, 7, 11, 17 and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison et al., U.S. Pub. No. 2023/0046710 A1 (relying on its PCT filing date of 12/14/2020) (hereinafter Morrison), in view of Takayama et al., U.S. Pub. No. 2016/0236778 A1 (hereinafter Takayama), further in view of Levien et al., U.S. Pub. No. 2007/0005651 A1 (hereinafter Levien).
Regarding claim 1.
Morrison teaches a head-wearable device (Morrison teaches in Figs 6 &9 element 104 and Para. [0029] and [0070] a head-mountable wearable device 104 (headset)), comprising:
an image sensor (Morrison teaches in Para. [0030] 104 includes at least one capture device 2 (also referred to herein as a sensor), which takes the form of an image capture device(s) (camera in the following examples)); and
one or more programs, wherein the one or more programs are stored in memory and configured to be executed by one or more processors, the one or more programs including instructions that, when executed by the head-wearable device, cause the head-wearable device (Morrison teaches in Para. [0026] the computer device 1 comprises a memory 5, one or more processing units 7, and the one or more I/O ports 8a, 8b. The memory 5 stores one or more computer programs 6 which, when implemented by the processing unit(s) 7,…, and further teaches in Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 being used to aid the user 100 and the bystander 102 interact, where the user 100 is visually impaired), to: capture image data using the image sensor on the head-wearable device, the image data describing a local area that includes a bystander (Morrison teaches in Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 being used to aid the user 100 and the bystander 102 interact, where the user 100 is visually impaired. The user 100 is wearing the wearable device 104 (i.e., the head-wearable device)…. An example head-mountable wearable device 104 (i.e., the head-wearable device) is described later with reference to FIG. 9), in accordance with a determination that the bystander is included in the image data: wherein the privacy data: includes (Morrison teaches in Para. [0052] the bystander 102 may still be within the field of view of the sensors 2 and, in that event, is being tracked by the tracking module 20 and his information extracted by the extraction function 21 if possible. 
This is to improve the responsiveness of the system, to the benefit of both the user 100 and any bystanders; no bystander information will be shared until he has consented (i.e., note that here the privacy data is protected until the bystander gives consent), and he will always be notified of that possibility upon entering the notification region. Morrison further teaches in Fig. 6 and Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 (i.e., the head-wearable device) being used to aid the user 100 and the bystander 102 interact (i.e., this indicates how the head-mounted device 104 and the bystander 102 communicate), where the user 100 is visually impaired), and indicates one of (i) a first permission status indicating that the user of the head-wearable device is not allowed to store identifying information of the bystander (note that here consent is not given by the bystander and thus the bystander information cannot be stored without the bystander's consent. Morrison teaches in Para. [0055] that the bystander 102 has not consented (i.e., not allowed, which is a first permission status) to his extracted information being shared…, the sharing function 22 does not share the extracted information with the user 100…, the sharing function 22 outputs only the tracking data to the user 100…, and further teaches in Para. [0076] that when the bystander 102 (i.e., note that the bystander 102 is a person who wears a device which can hear the same information as the user 100, or the audio output device 12 notifies the bystander 102 as narrated in Para. [0127]) is being tracked in the social region 106, the light 110 is white. While the light 110 is white, no identification information about the bystander 102 is announced to the user 100.
The user 100 may be made aware of the relative radial location of the bystander 102 through the use of haptic or audio devices within the wearable device 104 (i.e., the head-wearable device), but no bystander identity is announced to the user 100 while the light 110 is white. Additionally, see Paras. [0060], [0110] and [0115]: upon refusing consent, no extracted information about the bystander 102 is shared), and (ii) a second permission status indicating that the user of the head-wearable device can obtain permission to store identifying information (Morrison teaches in Para. [0048] the consent state module 24 uses the output of the tracking module 20 to determine a consent state of each detected bystander 102. The consent state indicates if the bystander 102 has given his consent for the extracted information about himself to be shared with the user 100. Herein, consent is given “dynamically” and determined through bystander tracking, and further teaches in Para. [0110] the wearable device (i.e., the head-wearable device) described above…, the person must consent to his data being shared. Consent is given by the person entering the identity sharing region 208. The person consents to information sharing dynamically. This is the case even when sharing is not dynamic, for example, in recorded systems where, once consent is given, the person cannot unconsent as his identity will be stored with the recorded video. Note that here, the claim lists features in the alternative. While the claim lists a number of optional limitations, only one limitation from the list is required and needs to be met by the prior art. However, the prior art of record Morrison addressed the aspects of both limitations “not allowed to store identifying” and “obtain permission to store identifying information” as indicated above)), and in accordance with a determination, based on the first permission status, that the bystander is a non-authorizing bystander (Morrison teaches in Para. [0070] FIG.
6 shows an example of a head-mountable wearable device 104 (i.e., the head-wearable device) being used to aid the user 100 and the bystander 102 interact, where the user 100 is visually impaired. The user 100 is wearing the wearable device 104. The wearable device 104 is fitted with sensors 2 in the form of cameras... An example head-mountable wearable device 104 is described later with reference to FIG. 9, and Morrison also teaches in Para. [0076] that when the bystander 102 (i.e., note that the bystander 102 is a person who wears a device which can hear the same information as the user 100, or the audio output device 12 notifies the bystander 102 as narrated in Para. [0127]) is being tracked in the social region 106, the light 110 is white. While the light 110 is white, no identification information about the bystander 102 is announced to the user 100, and further teaches in Para. [0092] the system does not learn who to announce and who not to announce, but rather allows the bystander 102 to decide each time he is near the user 100 whether he wants to be announced to the user 100 that time (i.e., note that here the first permission and authorization of the bystander are determined based on the bystander's consent)):
determine a portion of the image data that includes identifying information of the bystander (Morrison teaches in Para. [0025] capture devices 2 capture information relating to a bystander 102. Such a capture device may take the form of a camera such that the captured information is in the form of images or video frames, and further teaches in Para. [0034] with facial recognition (i.e., a portion of the image data), a facial template or other set of facial features is extracted, which in turn can be matched to other personal information, such as a name of the bystander, using facial recognition techniques as known in the art), wherein identifying information includes visual identification of the bystander (Morrison teaches in Para. [0075] as soon as the bystander enters the social region 106, he is being tracked by the tracking module 20. The bystander 102 is given a visual indication that he is being tracked by the wearable device 104. The light 110 on the wearable device 104 indicates that the bystander is being tracked). Morrison as a whole teaches the interaction (i.e., communication) between the head-mountable wearable device 104 and the bystander as indicated above. However, Morrison does not explicitly teach broadcast a message including an intent to capture the image data including the bystander and, responsive to the message, receive, from a device of the bystander, privacy data and a degree of connection.
However, Takayama teaches broadcast a message including an intent to capture the image data including the bystander and responsive to the message receive from a device of the bystander, privacy data (Takayama teaches in Para. [0067] a message may be broadcast to electronic devices in a given proximity range of the delivery location, and that message can cause those electronic devices to display or otherwise provide perceptible cues to people (i.e., bystanders) associated with those devices via their respective user interfaces; Takayama also teaches in Para. [0066] alternatively including a pixelated display panel on which color and/or black/white messages or images can be rendered; and further, Takayama teaches in Para. [0063] alternatively including imaging system(s) that function to capture image data or video from a camera mounted on the payload-release assembly 200), and a degree of connection and in accordance receive, from a device of the bystander (Takayama teaches in Para. [0063] imaging system(s) may include, for example, a pair of cameras that can be used to estimate the distance to the ground stereoscopically, for instance, by focusing the two spatially separated cameras on a common ground feature and determining distance based on the angle (i.e., a degree of connection) between the cameras, and further that the messages may be addressed to people (i.e., bystanders) associated with the particular delivery taking place, such as a person that placed an order for the delivery, or may be addressed to individuals based on their proximity to the target delivery location).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of broadcasting a message to the device and determining distance based on the angle (i.e., a degree of connection) between the cameras ([0067] and [0063]) as taught by Takayama into the teachings of the Morrison invention. One would have been motivated to do so in order to provide an indication of whether the payload is secured to the payload-release assembly or disconnected, in an efficient manner (Takayama, [0063]).
Morrison in view of Takayama does not explicitly teach modify the portion of the image data that includes the bystander.
However, Levien teaches modify the portion of the image data that includes the bystander (Levien teaches in Para. [0070] at least a portion of the modified media asset is associated with setting content of the modified media asset…, then it may be likely that images of bystanders may have been, modified anonymized, obscured, replaced, or otherwise blurred).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of including at least a portion of the modified media asset associated with the setting content of the modified media asset images ([0070]) as taught by Levien into the teachings of the Morrison in view of Takayama invention. One would have been motivated to do so in order to effectively restore the modified media assets. The method improves the former media asset to match user preferences, blocks out certain portions of the former media asset, and anonymizes or obscures the identity of a person or other subject of the former media asset.
Regarding claim 7.
Morrison further teaches wherein the one or more programs include instructions that, when executed by the head-wearable device, cause the head-wearable device to: identify the image data corresponding to identifying information of the bystander (Morrison teaches in Para. [0026] about one or more programs' execution and in Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 being used to aid the user 100 and the bystander 102 interact, where the user 100 is visually impaired…, and also teaches in Para. [0034] the sensor signals are also used by an information extraction module 21 to extract information about each of the bystanders 102 in the video stream where possible. With facial recognition, other personal information, such as a name of the bystander…, can also refer to other data obtained using the directly-extracted data (such as a name or other identity data obtained via facial recognition)). Morrison as a whole teaches facial recognition techniques in Paras. [0002]-[0004].
Morrison in view of Takayama does not explicitly teach process the image data, the processed image data representing at least one of a blurred or censored image of the face of the bystander.
However, Levien teaches process the image data, the processed image data representing at least one of a blurred or censored image of a face of the bystander (Levien teaches in Para. [0070] that if the recognition logic 112 recognizes that a setting content of the modified media asset 108 is associated with a crime scene photograph, then it may be likely that images of bystanders may have been anonymized, obscured, replaced, blurred, or otherwise modified. Note that here, the claim lists features in the alternative. While the claim lists a number of optional limitations, only one limitation from the list is required and needs to be met by the prior art; thus, the prior art of record Levien addressed the limitation of “a blurred” image).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of including at least a portion of the modified media asset associated with the setting content of the modified media asset images in an obscured manner ([0070]) as taught by Levien into the teachings of the Morrison in view of Takayama invention. One would have been motivated to do so in order to effectively restore the modified media assets. The method improves the former media asset to match user preferences, blocks out certain portions of the former media asset, and anonymizes or obscures the identity of a person or other subject of the former media asset.
Regarding claims 11 and 20.
Claims 11 and 20 incorporate substantively all the limitations of claim 1 in method and non-transitory computer-readable storage medium forms, respectively, and are rejected under the same rationale. Furthermore, regarding the limitation of a non-transitory computer-readable storage medium, Morrison teaches this feature in Para. [0142].
Regarding claim 17.
Claim 17 incorporates substantively all the limitations of claim 7 in a method form and is rejected under the same rationale.
Claims 2 and 12 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison in view of Takayama further in view of Levien and further in view of Raphael U.S. Pub. No. 2016/0316503 A1, (hereinafter Raphael).
Regarding claim 2. Morrison in view of Takayama further in view of Levien teaches the head-wearable device of claim 1.
Morrison further teaches wherein the one or more programs include instructions that, when executed by the head-wearable device, cause the head-wearable device to: the request requesting permission to store the portion of the image data that includes the bystander for a predetermined duration of time (Morrison teaches in Para. [0026] about the one or more programs and in Para. [0041] the sharing function 22 also receives inputs directly from the information selector 14. The sharing function 22 uses all of the received inputs to determine what information about the bystanders to present to the user 100, and also teaches in Para. [0036] that the bystander is classed as “familiar” if the level of information extracted is sufficient to conclude that the system has encountered the same bystander before (even if no external identity information is currently available), and earlier information extraction results are stored for the purpose of identifying familiar bystanders in the future (i.e., predetermined duration of time), and further teaches in Para. [0064] the set of unfiltered results 30 comprises results 32, each result 32 associated with a stored time 34 at which the result was obtained and a stored confidence value 36. The result 32 is the identity of the bystander 102); and
receive authorization from the device of the bystander to store the portion of the image data that includes the bystander for the predetermined duration of time (Morrison teaches in Para. [0041] the sharing function 22 also receives inputs directly from the information selector 14. The sharing function 22 uses all of the received inputs to determine what information about the bystanders to present to the user 100, and further teaches in Para. [0059] it is determined if the person is in the consent region, i.e., has given his consent to the extracted information being shared with the user 100. If he has consented, sharing is enabled, at step S46,... The bystander 102 is also notified that information sharing is enabled, and then teaches in Para. [0036] a bystander is classed as “familiar” if the level of information extracted is sufficient to conclude that the system has encountered the same bystander before (even if no external identity information is currently available), and earlier information extraction results are stored for the purpose of identifying familiar bystanders in the future (i.e., predetermined duration of time)). Morrison teaches about the head-wearable device and the bystander as indicated above.
Levien further teaches obscure the portion of the image data that includes the bystander (Levien teaches in Para. [0028]…, obscuring, enhancing, processing, or replacing portions of the former media asset 101. Further examples of such operations include rotating, scaling, coloring, or substituting portions of the former media asset 101, or altering a contrast, brightness, or other attribute of the former media asset 101,…). However, Morrison in view of Takayama further in view of Levien does not explicitly teach in accordance with a determination, based on the degree of connection between the user of the head-wearable device and the bystander, that the bystander is a temporary authorizing bystander.
However, Raphael teaches in accordance with a determination, based on the degree of connection between the user of the head-wearable device and the bystander, that the bystander is a temporary authorizing bystander (Raphael teaches in Paras. [0118]-[0122], which explain the degree of communication and separation between users in a social network. For example, Para. [0118] the “degree of separation” is similar to a degree of separation in a social sense in that a direct detection would be a first degree of separation (i.e., outside of the user's device). A second degree would be a connection identified as a separate device through which a user is in direct communication. Further, in Para. [0119] the degrees of separation are applied to forming presence groups. Upon detection a presence group is established, and further Para. [0123] teaches how the server then returns data for each user in the presence group and allows communication between users (i.e., temporary authorizing bystander). This data may be provided for all instances or provided in response to a request from the user's mobile either at the request of the user or automatically (at step 946). See Fig. 9 below:
[Raphael, Fig. 9 (media_image1.png, 728×488, greyscale) reproduced here]
transmit a request to the device of the bystander, and in accordance with a determination that the predetermined duration of time has expired (Raphael teaches in Para. [0162] the system will continue to provide other users with a proximity indication, even though the user has left the area. The persistence of the “sustain” function remains until the user's own timeout has expired (i.e., predetermined duration of time has expired), or the other user clears “sustain” users who are not present. It is further possible to provide users with an indication as to whether a persistent “sustain” user remains in the area).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of providing a degree of connection, allowing communication between users (i.e., bystander devices), and setting a time expiration for proximate users ([0118]-[0123] and [0162]) as taught by Raphael into the teachings of the head-mountable wearable device 104, in which the bystander 102 may be within the field of view of the sensors 2 and, in that event, is tracked by the tracking module 20 (Fig. 6, [0070] and [0052]) as taught by Morrison in view of Takayama further in view of Levien. One would have been motivated to do so because the presence group aggregation system platform allows the system to provide users with the ability to reduce and extend their visible contacts through inherent rings representing degrees of separation in an efficient manner.
Regarding claim 12.
Claim 12 incorporates substantively all the limitations of claim 2 in a method form and is rejected under the same rationale.
Claims 3 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison in view of Takayama further in view of Levien and further in view of Kurabi et al. U.S. Pub. No. 2015/0332258 A1 (hereinafter Kurabi).
Regarding claim 3. Morrison in view of Takayama further in view of Levien teaches the head-wearable device of claim 1.
Morrison further teaches wherein the one or more programs include instructions that, when executed by the head-wearable device, cause the head-wearable device (Morrison teaches in Para. [0026] the computer device 1 comprises a memory 5, one or more processing units 7, and the one or more I/O ports 8a, 8b. The memory 5 stores one or more computer programs 6 which, when implemented by the processing unit(s) 7,…, and further teaches in Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 being used to aid the user 100 and the bystander 102 interact, where the user 100 is visually impaired). Morrison thus teaches a head-mountable wearable device 104 (i.e., the head-wearable device) as indicated above.
However, Morrison in view of Takayama further in view of Levien does not explicitly teach receive a first broadcast message from a proximate capturing device of a proximate user, the first broadcast message indicating an intention to capture image data, the first broadcast message including at least one of an identifier of the proximate capturing device or a hashed social networking identifier of the proximate user; and in response to receipt of the first broadcast message: generate a second broadcast message including privacy data associated with the user of the head-wearable device, and transmit the second broadcast message.
However, Kurabi teaches receive a first broadcast message from a proximate capturing device of a proximate user, the first broadcast message indicating an intention to capture image data (Kurabi teaches in Para. [0006] based on proximity to a client device without sending secure information via short-range wireless signaling may include operations for broadcasting, via short-range wireless signals, a first message requesting a peripheral response, receiving, via the short-range wireless signals, and further teaches in Para. [0010] the user authentication data may include an image of the user of the client device…), the first broadcast message including at least one of an identifier of the proximate capturing device or a hashed social networking identifier of the proximate user (Kurabi teaches in Para. [0046] broadcast messages may be received by proximate mobile devices and relayed to the server for resolving. When the resolved identifiers are recognized (e.g., match registered user or device identifiers), the server may respond to the proximate mobile devices with messaging indicating that the point-of-sale devices may be trusted for further communications regarding transactions (e.g., connection via Bluetooth link, etc.). Note that here, the claim lists features in the alternative. While the claim lists a number of optional limitations, only one limitation from the list is required and needs to be met by the prior art; thus, the prior art of record Kurabi addressed “broadcasting the first message including an identifier of the proximate capturing device”); and
in response to receipt of the first broadcast message: generate a second broadcast message including privacy data associated with the user of the head-wearable device, and transmit the second broadcast message (Morrison provides the user of the head wearable device as indicated above and Kurabi teaches in Para. [0012] in response to broadcasting the second message, establishing, with the second short-range wireless transceiver and further teaches in Para. [0014] the second message broadcast by the point-of-sale device via short-range wireless signals may include a secure identifier of the point-of-sale device, and the first processor of the client device may be configured with processor-executable instructions for performing operations that may further include transmitting to the server via the first wide area network interface a sighting message including the secure identifier of the point-of-sale device in response to receiving the second message).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to modify the teachings of including the system of broadcasting the first and the second messages and device identifiers ([0046], [0012] and [0014]) as taught by Kurabi into the teachings of using the head-mountable device 104 (i.e., the head-wearable device) ([0070] and Fig. 9) as taught by Morrison in view of Takayama further in view of Levien. One would have been motivated to do so in order to have the client device transmit peripheral response messages only when in proximity of a point-of-sale device, thus saving power and reducing exposure to other devices. The functioning of the client mobile devices is improved by freeing them from the need to communicate through a wide area network (WAN), thus conserving power once registration of users with the server is complete. The method enables a more secure verification of the client device identity, as less sensitive data is delivered to the point-of-sale device from the server.
Regarding claim 13.
Claim 13 incorporates substantively all the limitations of claim 3 in a method form and is rejected under the same rationale.
Claims 4 and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison in view of Takayama further in view of Levien and further in view of Wexler et al. U.S. Pub. No. 2021/0235207 A1, (hereinafter Wexler).
Regarding claim 4. Morrison in view of Takayama further in view of Levien teaches the head-wearable device of claim 1.
Morrison further teaches wherein the one or more programs include instructions that, when executed by the head-wearable device, cause the head-wearable device (Morrison teaches in Para. [0026] the computer device 1 comprises a memory 5, one or more processing units 7, and the one or more I/O ports 8a, 8b. The memory 5 stores one or more computer programs 6 which, when implemented by the processing unit(s) 7,…, and further teaches in Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 being used to aid the user 100 and the bystander 102 interact, where the user 100 is visually impaired), to: determine a relative position of the bystander using the identified audio signal (Morrison teaches in Para. [0076] the user 100 may be made aware of the relative radial location of the bystander 102 through the use of haptic or audio devices within the wearable device 104). Morrison also teaches determine the position of the bystander using the relative position (Morrison teaches in Para. [0079] as the bystander 102 moves along the path from B to C, the light 110 also moves to reflect the bystander's relative radial position with respect to the user 100 from P.sub.1 to P.sub.2 on the wearable device 104, and further teaches in Para. [0075] the light is at the relative radial position of the bystander 102 to the user 100, the light 110 being at position P.sub.1 when the bystander 102 is at location B). Further, Morrison teaches about an audio sensor on the head-wearable device (Morrison teaches in Fig. 9 elements 12L & 12R of 104, a head-mountable device, and Para. [0099] earpiece or other audio output transducers are to the left and right of the headband 15. These are in the form of a pair of bone conduction audio transducers 12L, 12R functioning as left and right audio channel output speakers).
Morrison in view of Takayama further in view of Levien does not explicitly teach identify an audio signal associated with sound from the bystander in the local area using an audio sensor on the head-wearable device; and determine the position of the bystander using the relative position and global positioning system (GPS) coordinates of the head-wearable device.
However, Wexler teaches identify an audio signal associated with sound from the bystander in the local area using an audio sensor on the head-wearable device (Wexler teaches in Para. [0236] audio signals associated with individuals that are known to user 100 (i.e., bystander device) may be selectively amplified or otherwise conditioned to have priority over unknown individuals. For example, processor 210 may be configured to attenuate or silence audio signals associated with bystanders in the user's environment. For example, the database may be associated with a social network of the user (e.g., Facebook™, LinkedIn™, etc.) and individuals may be prioritized based on their grouping or relationship with the user); and
global positioning system (GPS) coordinates of the head-wearable device (Wexler teaches in Para. [0369] the hearing aid system may have access to a global positioning system (GPS) and may determine the location of user 100. For example, the hearing aid system may include a GPS system, or it may communicate with a mobile device (e.g., smartphone, tablet, laptop, etc.) of user 100 that includes a GPS system (or alternative system for determining position of the mobile device, such as Wi-Fi, local network, etc.) to obtain location data (e.g., coordinates of user 100, address of user 100, IP address of the mobile device of user 100, etc.), and further Wexler teaches in Para. [0385] using a multiplicity of wearable microphones mounted at different positions on user 100 (i.e., note that the device being mounted at different positions on the user indicates the claimed head-wearable device)).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of identifying an audio signal associated with the bystander or user device, coordinating the earpieces to determine the location of user 100 (i.e., the bystander) based on GPS, and using a wearable device mounted at different positions on the user ([0369], [0193] and [0385]), as taught by Wexler, into the invention of Morrison in view of Takayama further in view of Levien. One would have been motivated to do so because processing representations of images allows the apparatus to improve processing efficiency and/or helps to preserve battery life, and because audio signals corresponding to the voice of the user are selectively transmitted to a remote device by amplifying the voice of the user and/or attenuating or eliminating altogether sounds other than the voice of the user.
Regarding claim 14.
Claim 14 incorporates substantively all the limitations of claim 4 in method form and is rejected under the same rationale.
Claims 5 and 15 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison in view of Takayama further in view of Levien and further in view of Obaidi et al., U.S. Pub. No. 2022/0132202 A1 (hereinafter Obaidi).
Regarding claim 5. Morrison in view of Takayama further in view of Levien teaches the head-wearable device of claim 1.
Morrison further teaches wherein the one or more programs include instructions that, when executed by the head-wearable device, cause the head-wearable device to: capture audio data using an audio sensor on the head-wearable device (Morrison teaches in Para. [0026] about execution of one or more programs, and in Fig. 9 elements 12L & 12R of head-mountable device 104 (i.e., the head-wearable device), which uses audio transducers (i.e., an audio sensor) to capture audio data, and in Para. [0099] earpiece or other audio output transducers are to the left and right of the headband 15. These are in the form of a pair of bone conduction audio transducers 12L, 12R functioning as left and right audio channel output speakers); and
Morrison in view of Levien further teaches obscure identifying information in the plurality of regions of the audio data that includes users of the at least one of the proximate devices in the plurality of regions (Morrison teaches in Para. [0088] the closest bystander 102 may be the bystander 102 who is physically closest to the user 100 in any direction. For example, a bystander who is 0.5 m from the user 100 directly to the right of the user 100 is closer to the user 100 than a bystander 102 who is 1.5 m away from the user 100 but at an angle of 20° to the line of sight 108, and Morrison also provides the user of the head-wearable devices as indicated above. Additionally, Levien teaches in Para. [0028]…, obscuring, enhancing, processing, or replacing portions of the former media asset 101. Further examples of such operations include rotating, scaling, coloring, or substituting portions of the former media asset 101, or altering a contrast, brightness, or other attribute of the former media asset 101,…).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of including at least a portion of the modified media asset and obscuring, enhancing, processing, or replacing portions of the former media asset ([0028]), as taught by Levien, into the invention of Morrison in view of Takayama. One would have been motivated to do so in order to improve the former media asset or otherwise to match a user preference, to block out certain portions of the former media asset, or to anonymize or otherwise obscure an identity of a person or other subject of the former media asset (Levien, [0028]).
Morrison in view of Takayama further in view of Levien does not explicitly teach determine to operate in a private mode; in response to operating in the private mode: request permission from proximate devices to store the audio data associated with users of the proximate devices, the proximate devices within the local area and a personal area network range of the head-wearable device; in response to receiving approvals of the request from the proximate devices, store the audio data associated with users of the proximate devices, and in response to receiving a rejection of the request from at least one of the proximate devices: determine a plurality of regions in the audio data that includes identifying information of users of the at least one of the proximate devices.
However, Obaidi teaches determine to operate in a private mode (Obaidi teaches in Para. [0073] and [0076] the privacy models 240 may be configured to determine when to automatically instruct nearby computing devices to adjust recording functions and also see Para. [0087]-[0089]);
in response to operating in the private mode: request permission from proximate devices to store the audio data associated with users of the proximate devices, the proximate devices within the local area and a personal area network range of the head-wearable device (Obaidi teaches in Figs. 1 & 3 and Para. [0090] the location of where computing devices may be considered nearby (i.e., the proximate devices), and/or the permissions 325. The server receives the request and identifies the nearby computing devices (i.e., the proximate devices)…The server may provide instructions to each of the nearby computing devices (i.e., the proximate devices) to adjust the recording functions for the time specified in the user instruction…The nearby computing device may compare the permissions included in the instructions to the permissions 325 of the nearby computing device. Obaidi then teaches in Para. [0051] that the privacy management client 166 may not change the recording status, and if the recording status is enabled, then the microphone 168 is able to detect audio and/or the computing device 112 is able to store audio data detected by the microphone 168, and also teaches in Para. [0031] the server 136 may communicate with the building server 138 through the internet, a local area network, a wireless wide area network, or any other similar type of network. Note that "element 166" and "element 355" are both privacy management client elements);
in response to receiving approvals of the request from the proximate devices, store the audio data associated with users of the proximate devices (Obaidi teaches in Figs. 1 & 3 and Paras. [0043], [0045] and [0090] about the permission of the nearby devices (i.e., the proximate devices), and also teaches in Para. [0051] that the privacy management client 166 may not change the recording status, and if the recording status is enabled, then the microphone 168 is able to detect audio and/or the computing device 112 is able to store audio data detected by the microphone 168), and in response to receiving a rejection of the request from at least one of the proximate devices: determine a plurality of regions in the audio data that includes identifying information of users of the at least one of the proximate devices (Obaidi teaches in Figs. 1 & 3 and Para. [0097] if the permissions 325 are higher than the permissions of the computing device that originated the request, then the privacy management client 355 rejects the request and provides data indicating the permissions override, and further teaches in Para. [0100] the server 136 receives from a first computing device 104, a request 124 to disable an audio, video, or image capture feature of one or more other computing devices in a particular location 180 (410). The user 102 of the first computing device 104 may be prepared to conduct a meeting that includes discussing confidential information, and further teaches in Para. [0101] that where the request 124 indicates that the particular location includes a threshold distance from the first computing device 104, the server 136 may identify the area based on combining the threshold distance and the location of the first computing device 104).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teachings of providing permission to nearby devices to communicate and to detect audio and/or store audio data detected by the microphone within a local area network ([0090], [0043], [0051] and [0031]), as taught by Obaidi, into the invention of Morrison in view of Takayama further in view of Levien. One would have been motivated to do so because the method enables a computing device to provide a user interface that allows a user to select a mode of operation of the computing device, thus allowing the user to interact with the device in an efficient manner. The method allows the user to select the mode in an effective manner, so that the user can interact with a display device quickly and efficiently, thus reducing power consumption of the display device and improving the user experience.
Regarding claim 15.
Claim 15 incorporates substantively all the limitations of claim 5 in method form and is rejected under the same rationale.
Claims 6 and 16 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison in view of Takayama further in view of Levien further in view of Obaidi and further in view of Nahman et al., U.S. Pub. No. 2019/0227766 A1 (hereinafter Nahman).
Regarding claim 6. Morrison in view of Takayama further in view of Levien and further in view of Obaidi teaches the head-wearable device of claim 5.
Morrison further teaches the "wherein" limitation as indicated above.
Morrison in view of Takayama further in view of Levien and further in view of Obaidi does not explicitly teach: determine at least one of an ambient background volume level or a number of people within the local area; and determine to operate in the private mode in response to at least one of the ambient background volume level falling below a threshold volume level or the number of people falling below a threshold number of people.
However, Nahman teaches determine at least one of an ambient background volume level or a number of people within the local area (Nahman teaches in Figs. 1B & 1C and Para. [0030] here the determination can be made based on the sound field 120(C), which could be derived from a specific audio source and emitted with a first volume level, while sound field 120(E) could also be derived from that same audio source although emitted with a second volume level, and further teaches in Para. [0052] an inertial measurement unit (IMU) configured to measure the motion of user 110 through space, and a set of acoustic transducers configured to measure ambient sound (i.e., volume level). Here, the claim lists features in the alternative. While the claim lists a number of optional limitations, only one limitation from the list is required and needs to be met by the prior art. However, the prior art of record Nahman addresses the limitations of "emitted with a second volume level" and "measure ambient sound," which is the claimed ambient background volume); and
determine to operate in the private mode in response to at least one of the ambient background volume level falling below a threshold volume level or the number of people falling below a threshold number of people (Nahman teaches in Para. [0004] when using only one headphone or earbud, any stereo effects are lost, resulting in a much less immersive listening experience. Further, such approaches enable sound to be shared between no more than two people (i.e., people falling below a threshold number of people). Thus, as a general matter, conventional personal sound systems cannot easily be used with multiple listeners, and further teaches in Para. [0052] an inertial measurement unit (IMU) configured to measure the motion of user 110 through space, and a set of acoustic transducers configured to measure ambient sound (i.e., volume level), and also teaches in Paras. [0018]-[0020] a private mode, in which the audio sharing system outputs sound only to the user and may isolate the user from the surrounding acoustic environment…FIGS. 1A-1E illustrate an audio sharing system configured to implement one or more aspects of the present embodiments. As shown in FIG. 1A, audio sharing system 100 is configured to be worn by a user 110. In particular, components of audio sharing system 100 may be coupled with the head 112 and/or shoulders 114 of user 110..., audio sharing system 100 is configured to emit sound to generate a sound field 120(A)…, audio sharing system 100 operates in a "private" mode of operation and therefore generates sound field 120(A) in the immediate proximity of user 110. When operating in private mode, audio sharing system 100 may implement directional sound techniques to direct sound targeting user 110 and to avoid emitting sound that can be perceived by nearby listeners 130(0) and 130(1). Here, the claim lists features in the alternative. While the claim lists a number of optional limitations, only one limitation from the list is required and needs to be met by the prior art. However, the prior art of record Nahman addresses both limitations as required).
It would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to incorporate the teaching that conventional personal sound systems cannot easily be used with multiple listeners ([0004]), as taught by Nahman, into the invention of Morrison in view of Takayama further in view of Levien and further in view of Obaidi. One would have been motivated to do so in order to avoid any loss of stereo effects, which would result in a much less immersive listening experience.
Regarding claim 16.
Claim 16 incorporates substantively all the limitations of claim 6 in method form and is rejected under the same rationale.
Claims 8 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Morrison in view of Takayama further in view of Levien and further in view of Lacava et al., U.S. Pub. No. 2020/0244452 A1 (hereinafter Lacava).
Regarding claim 8. Morrison in view of Takayama further in view of Levien teaches the head-wearable device of claim 1.
Morrison further teaches wherein the one or more programs include instructions that, when executed by the head-wearable device, cause the head-wearable device to: capture audio data using an audio sensor on the head-wearable device (Morrison teaches in Para. [0026] about execution of one or more programs, and in Para. [0070] FIG. 6 shows an example of a head-mountable wearable device 104 ... The cameras are able to capture image data over 180°…, and further teaches in Para. [0098] the wearable device 104 also comprises one or more cameras 2-stereo cameras 2L, 2R mounted on