Prosecution Insights
Last updated: April 19, 2026
Application No. 18/748,643

SYSTEMS AND METHODS FOR HANDLING CONTEXTUAL QUERIES

Current status: Non-Final OA (§103)
Filed: Jun 20, 2024
Examiner: WILSON, KIMBERLY LOVEL
Art Unit: 2165
Tech Center: 2100 — Computer Architecture & Software
Assignee: Adeia Guides Inc.
OA Round: 3 (Non-Final)

Grant Probability: 71% (Favorable)
OA Rounds: 3-4
To Grant: 3y 10m
With Interview: 88%

Examiner Intelligence

Career Allow Rate: 71% (387 granted / 547 resolved; +15.7% vs TC avg, above average)
Interview Lift: +17.6% for resolved cases with interview
Avg Prosecution (typical timeline): 3y 10m
Total Applications: 562 across all art units (15 currently pending)

Statute-Specific Performance

§101: 24.6% (-15.4% vs TC avg)
§103: 40.6% (+0.6% vs TC avg)
§102: 13.3% (-26.7% vs TC avg)
§112: 13.0% (-27.0% vs TC avg)

Tech Center average values are estimates. Based on career data from 547 resolved cases.
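The headline figures above follow from the raw career counts. A minimal sketch reproducing them (assuming the tool defines allow rate as granted / resolved and computes the "vs TC avg" delta and interview lift against that rate; the vendor's exact baselines are not stated):

```python
# Recomputing the dashboard's headline numbers from the raw counts shown above.
# Assumption: allow rate = granted / resolved; the exact baseline used for
# "interview lift" is not stated, so the last figure is an approximation.
granted, resolved = 387, 547

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.1%}")        # 70.7%, displayed as 71%

tc_avg = allow_rate - 0.157                          # dashboard shows +15.7% vs TC avg
print(f"Implied Tech Center average: {tc_avg:.1%}")  # ~55.0%

with_interview = 0.88                                # grant probability with interview
print(f"Lift vs career average: {with_interview - allow_rate:+.1%}")
```

The computed lift (~+17.3%) is close to, but not exactly, the displayed +17.6%, which suggests the tool's baseline is the without-interview rate rather than the overall career average.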

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 13 January 2026 has been entered.

Priority

It is acknowledged that this application is a continuation of US Application No. 17/946,545, filed 9/16/22, now US Patent No. 12,056,180, which is a continuation of US Application No. 17/253,590, filed 12/17/20, now US Patent No. 11,481,429, which is a 371 (national stage) of PCT/US2018/052929, filed 9/26/18.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains.
Patentability shall not be negated by the manner in which the invention was made.

Claims 35, 36, 38-45, and 47-52 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 9,137,308 to Petrou et al. (hereafter Petrou) in view of US Patent No. 10,623,403 to Gupta et al. (hereafter Gupta) in view of US PGPub 2019/0045300 to Cho et al. (hereafter Cho) in view of US PGPub 2019/0197510 to Guiney et al. (hereafter Guiney).

Referring to claim 35, Petrou discloses a method comprising: capturing [recording], automatically at a predetermined interval, a media sample of a media asset at the computing device (see column 5, lines 17-45; column 6, lines 3-5; column 7, lines 44-45; column 9, lines 62-67 – In this example, media stream 600 is recorded by a sensor included in a mobile computing device. Said media stream may include both audio and video data and may be temporarily stored in a region of a memory that is continuously overwritten unless a media data capture event occurs. Capture event manager may be set to continuously or periodically capture media data for a specific interval of time.); analyzing, after the capture [recording], the captured media sample to determine whether the media sample meets at least one criterion [an event to trigger media data capture has occurred (e.g., a loud or unusual sound detected, an object detected similar but different from object included in the history of user experiences, if a person, place or object is recognized via the audio or visual data)] (see column 5, line 17 – column 6, line 2; column 6, lines 3-18; see column 7, lines 53-64 – In response to receiving the sensor data, the mobile computing device analyzes the sensor data to determine if one or more media capture events has occurred.
The user of the mobile computing device may define a media capture event – e.g., if a person, place or object is recognized via the audio or visual data.); based on determining that the captured media sample meets the at least one criterion [media capture event has occurred], generating a time identifier [time data] for the captured media sample (see column 5, line 38 – column 6, line 29; column 9, lines 62-67 – In response to the capture of media and based on the preferences set within capture event manager 230, audio/visual search generator 216 prepares the captured media data for transmission to audio/visual based search system 232. In one embodiment, audio/visual search generator 216 generates digital signatures for objects within image data captured by image capture module 222, selects audio samples or generates digital signatures from audio data captured by audio capture module 220, and obtains data from sensor data generators 224, such as location data, time data, acceleration data, etc.); receiving a query [audio/visual based search system may receive the user query from audio/visual based searched client], wherein the query comprises an identifier of a time window (see column 4, lines 25-63 – “Show me the faces of all people that were seen between 1 P.M. and 3 P.M.” “Who were the people at the business lunch this afternoon.” This afternoon and 1 P.M. and 3 P.M. are construed as representing a time window.); determining whether the time identifier for the captured media sample is within the time window received in the query (see column 9, lines 25-41 - Processing logic then transmits the search to an audio/visual based search system (processing block 504). The search is received from the audio visual based search client (processing block 506) and processing logic queries an audio/visual based search history based on the received search (processing block 508). For example, a search might request "Show me all artwork I saw in Vienna?" 
Processing logic would query the audio/visual based search history for image matches within a user's history for artwork associated with the event of the user being in Vienna, Austria. As another example, a search might request "What was the playlist at the club last night?" Processing logic would query the audio/visual based search history for audio matches that include song titles for an event related to the user being at the location specified in the search (i.e., "the club last night").); based on determining that the time identifier for the captured media sample is within the time window received in the query, identifying the captured media sample based on the time identifier (see column 9, lines 25-41); and based on the identifying the captured media sample, generating a reply to the query (see column 9, lines 42-60). While Petrou discloses capturing a media sample of a media asset at the computing device (see column 2, line 66 – column 3, line 12) Petrou fails to explicitly teach the further limitation wherein the at least one device is outputting the media sample. Gupta teaches a first device playing an acoustic file and a second device recording the acoustic file, wherein the two devices are within proximity of one another (see column 3, lines 13-27) including the limitation of capturing a media sample of a media asset at the computing device [second electronic device], wherein the at least one device [first device] is outputting the captured media sample [acoustic signal] (see column 3, lines 13-27 – Recording, by the second device, the acoustic signal generated by the first device.). Petrou and Gupta are analogous art since they both relate to capturing media files. Petrou teaches recording the media sample from a user’s environment in general. Therefore, Petrou fails to explicitly teach that the media sample is being output by a second device. 
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to capture the media sample of Petrou from a secondary device playing the media sample as taught by Gupta. One would have been motivated to do so since, in order for a first device to capture the music that was played at the club last night to respond to the query of Petrou/Gupta (Petrou: see column 9, lines 25-41), a device would need to play the music.

While the combination of Petrou and Gupta (hereafter Petrou/Gupta) teaches that the criterion can be a very loud sound (Petrou: see column 5, lines 38-45), Petrou/Gupta fails to explicitly teach the further limitation wherein the at least one criterion is a threshold value.

Cho teaches the capturing and recording of sounds by a device, including the further limitations of capturing a media sample [sound] (see [0021] – The lighting fixtures may receive voice commands or other sounds that emanated from within the room and/or voice commands or other sounds that enter the room. For example, each lighting fixture may include a microphone that can receive a sound.); analyzing, after capture, the captured media sample [sound] to determine whether the captured media sample meets at least one criterion, wherein the at least one criterion is a threshold value [threshold amplitude] (see [0032] – Each lighting fixture may compare the amplitude of a received sound against a threshold amplitude to determine whether to record and/or transmit the sound.); based on determining that the captured media sample [sound] meets the at least one criterion [threshold], generating a time identifier [timestamp] for the captured media sample (see [0037] – The circuit may timestamp the digital signal generated from the sound/voice received by the microphone when the amplitude of the sound/voice exceeds a threshold as determined, for example, based on the digital signal from the ADC.).
It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to determine that the sound of Petrou/Gupta is loud using the process of Cho. One would have been motivated to do so in order to provide real-time noise cancellation to remove ambient noise thereby increasing the accuracy of detecting an event (Cho: see [0019], lines 1-3; Petrou: see column 5, lines 27-45). While the combination of Petrou/Gupta and Cho (hereafter Petrou/Gupta/Cho) teaches a device and a computing device and the two devices being in proximity of one another (Gupta: see column 3, lines 13-27), Petrou/Gupta fails to explicitly teach the further limitation of detecting at least one device within a near-field communication range of a computing device. Guiney teaches determining the proximity of devices, including the further limitation of detecting at least one device that is socially linked to a computing device is within a near-field communication range of the computing device (see [0074] – First mobile computing device detects second mobile computing device using any appropriate protocol of communication associated with mobile computing devices including near field communication. In additional examples, first mobile computing device detects mobile computing devices within a proximity range and additionally with reward points accounts associated with the list of “Friends” or “Contacts.” The Examiner construes the list of friends or contacts as providing the social link.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to detect the proximity of the two devices of Petrou/Gupta/Cho by utilizing the NFC technologies of Guiney. 
One would have been motivated to do so since Petrou/Gupta teaches devices being in proximity to one another (Gupta: see column 3, lines 13-15) and Guiney teaches different manners in which the proximity can be determined (Guiney: see [0074], lines 3-9). Referring to claim 36, the combination of Petrou/Gupta/Cho and Guiney (hereafter Petrou/Gupta/Cho/Guiney) teaches the method of claim 35, wherein the threshold value is a threshold sound level, and wherein the determining that the captured media sample meets the at least one criterion further comprises: detecting a sound level [amplitude of the sound/voice] of the captured media sample via a microphone [microphone] of the computing device (Cho: see [0037]); and determining that the sound level of the captured media sample is above the threshold sound level, wherein the time identifier is generated based on the determining that the sound level of the captured media sample is above the threshold sound level (Cho: see [0037] – The circuit may timestamp the digital signal generated from the sound/voice received by the microphone when the amplitude of the sound/voice exceeds a threshold as determined, for example, based on the digital signal from the ADC.). Referring to claim 38, Petrou/Gupta/Cho/Guiney teaches the method of claim 35, further comprising: identifying the media asset of the captured media sample by: generating a media fingerprint based on the captured media sample (Petrou: see column 6, lines 33-60 – digital signature); identifying the media asset based on the media fingerprint (Petrou: see column 6, lines 33-60); and associating the media asset with the captured media sample (Petrou: see column 6, lines 49-60). Referring to claim 39, Petrou/Gupta/Cho/Guiney teaches the method of claim 38, wherein the reply to the query comprises a media asset identifier of the media asset (Petrou: see column 9, lines 42-60). 
Referring to claim 40, Petrou/Gupta/Cho/Guiney teaches the method of claim 35, further comprising: based on determining that the captured media sample meets the at least one criterion: storing the captured media sample and the time identifier for the media sample in memory (Petrou: see Fig 1 and column 3, lines 47-63). Referring to claim 41, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the control circuitry is further configured to: based on determining that the captured media sample does not meet the at least one criterion: delete the captured media sample (Petrou: see column 5, lines 17-37 – The sensor data is constantly listened for and is only captured and stored if a criterion is met.). Referring to claim 42, Petrou/Gupta/Cho/Guiney teaches the method of claim 35, wherein the captured media sample of the media asset is captured while the at least one device outputs the captured media sample at a threshold sound level (Petrou: see column 5, lines 40-45 – A capture event trigger can be a loud sound detected.; Cho: see [0032]; [0037] – threshold amplitude). Referring to claim 43, Petrou/Gupta/Cho/Guiney teaches the method of claim 35, wherein the generating the reply to the query further comprises: identifying media asset metadata corresponding to the media asset, wherein the reply includes the media asset metadata (Petrou: see column 9, lines 42-60). Referring to claim 44, Petrou discloses a system comprising: input/output circuitry (see column 13, line 65 – column 14, line 40) configured to: capture [recording], automatically at a predetermined interval, a media sample of a media asset at the computing device (see column 5, lines 17-45; column 6, lines 3-5; column 7, lines 44-45; column 9, lines 62-67 – In this example, media stream 600 is recorded by a sensor included in a mobile computing device. 
Said media stream may include both audio and video data and may be temporarily stored in a region of a memory that is continuously overwritten unless a media data capture event. Capture event manager may be set to continuously or periodically capture media data for a specific interval of time.); wherein the control circuitry (see column 13, line 65 – column 14, line 40) is further configured to: analyze, after the capture [recording], the captured media sample to determine whether the media sample meets at least one criterion [an event to trigger media data capture has occurred (e.g., a loud or unusual sound detected, an object detected similar but different from object included in the history of user experiences, if a person, place or object is recognized via the audio or visual data)] (see column 5, line 17 – column 6, line 2; column 6, lines 3-18; see column 7, lines 53-64 – In response to receiving the sensor data, the mobile computing device analyzes the sensor data to determine if one or more media capture events has occurred. The user of the mobile computing device may define a media capture event – e.g., if a person, place or object is recognized via the audio or visual data.); based on determining that the captured media sample meets the at least one criterion [media capture event has occurred], generate a time identifier [time data] for the captured media sample (see column 5, line 38 – column 6, line 29; column 9, lines 62-67 – In response to the capture of media and based on the preferences set within capture event manager 230, audio/visual search generator 216 prepares the captured media data for transmission to audio/visual based search system 232. 
In one embodiment, audio/visual search generator 216 generates digital signatures for objects within image data captured by image capture module 222, selects audio samples or generates digital signatures from audio data captured by audio capture module 220, and obtains data from sensor data generators 224, such as location data, time data, acceleration data, etc.); wherein input/output circuitry (see column 13, line 65 – column 14, line 40) further configured to: receive a query [audio/visual based search system may receive the user query from audio/visual based searched client], wherein the query comprises an identifier of a time window (see column 4, lines 25-63 – “Show me the faces of all people that were seen between 1 P.M. and 3 P.M.” “Who were the people at the business lunch this afternoon.” This afternoon and 1 P.M. and 3 P.M. are construed as representing a time window.); wherein the control circuitry (see column 13, line 65 – column 14, line 40) is further configured to: determine whether the time identifier for the captured media sample is within the time window (see column 9, lines 25-41 - Processing logic then transmits the search to an audio/visual based search system (processing block 504). The search is received from the audio visual based search client (processing block 506) and processing logic queries an audio/visual based search history based on the received search (processing block 508). For example, a search might request "Show me all artwork I saw in Vienna?" Processing logic would query the audio/visual based search history for image matches within a user's history for artwork associated with the event of the user being in Vienna, Austria. As another example, a search might request "What was the playlist at the club last night?" 
Processing logic would query the audio/visual based search history for audio matches that include song titles for an event related to the user being at the location specified in the search (i.e., "the club last night").); based on determining that the time identifier for the captured media sample is within the time window, identify the captured media sample based on the time identifier (see column 9, lines 25-41); and based on the identifying the media sample, generating a reply to the query (see column 9, lines 42-60). While Petrou discloses capturing a media sample of a media asset at the computing device (see column 2, line 66 – column 3, line 12) Petrou fails to explicitly teach the further limitation wherein the at least one device is outputting the media sample. Gupta teaches a first device playing an acoustic file and a second device recording the acoustic file, wherein the two devices are within proximity of one another (see column 3, lines 13-27) including the limitation of capture a media sample of a media asset at the computing device [second electronic device], wherein the at least one device [first device] is outputting the captured media sample [acoustic signal] (see column 3, lines 13-27 – Recording, by the second device, the acoustic signal generated by the first device.). Petrou and Gupta are analogous art since they both relate to capturing media files. Petrou teaches recording the media sample from a user’s environment in general. Therefore, Petrou fails to explicitly teach that the media sample is being output by a second device. It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to capture the media sample of Petrou from a secondary device playing the media sample as taught by Gupta. 
One would have been motivated to do so since, in order for a first device to capture the music that was played at the club last night to respond to the query of Petrou/Gupta (Petrou: see column 9, lines 25-41), a device would need to play the music.

While the combination of Petrou and Gupta (hereafter Petrou/Gupta) teaches that the criterion can be a very loud sound (Petrou: see column 5, lines 38-45), Petrou/Gupta fails to explicitly teach the further limitation wherein the at least one criterion is a threshold value.

Cho teaches the capturing and recording of sounds by a device, including the further limitations of capture a media sample [sound] (see [0021] – The lighting fixtures may receive voice commands or other sounds that emanated from within the room and/or voice commands or other sounds that enter the room. For example, each lighting fixture may include a microphone that can receive a sound.); analyze, after capture, the captured media sample [sound] to determine whether the captured media sample meets at least one criterion, wherein the at least one criterion is a threshold value [threshold amplitude] (see [0032] – Each lighting fixture may compare the amplitude of a received sound against a threshold amplitude to determine whether to record and/or transmit the sound.); based on determining that the captured media sample [sound] meets the at least one criterion [threshold], generate a time identifier [timestamp] for the captured media sample (see [0037] – The circuit may timestamp the digital signal generated from the sound/voice received by the microphone when the amplitude of the sound/voice exceeds a threshold as determined, for example, based on the digital signal from the ADC.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to determine that the sound of Petrou/Gupta is loud using the process of Cho.
One would have been motivated to do so in order to provide real-time noise cancellation to remove ambient noise thereby increasing the accuracy of detecting an event (Cho: see [0019], lines 1-3; Petrou: see column 5, lines 27-45). While the combination of Petrou/Gupta and Cho (hereafter Petrou/Gupta/Cho) teaches a device and a computing device and the two devices being in proximity of one another (Gupta: see column 3, lines 13-27), Petrou/Gupta fails to explicitly teach the further limitation of detecting at least one device within a near-field communication range of a computing device. Guiney teaches determining the proximity of devices, including the further limitation of control circuitry configured to: detect at least one device that is socially linked to a computing device is within a near-field communication range of the computing device (see [0074] – First mobile computing device detects second mobile computing device using any appropriate protocol of communication associated with mobile computing devices including near field communication. In additional examples, first mobile computing device detects mobile computing devices within a proximity range and additionally with reward points accounts associated with the list of “Friends” or “Contacts.” The Examiner construes the list of friends or contacts as providing the social link.). It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to detect the proximity of the two devices of Petrou/Gupta/Cho by utilizing the NFC technologies of Guiney. One would have been motivated to do so since Petrou/Gupta teaches devices being in proximity to one another (Gupta: see column 3, lines 13-15) and Guiney teaches different manners in which the proximity can be determined (Guiney: see [0074], lines 3-9). 
Referring to claim 45, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the threshold value is a threshold sound level, and wherein the determining that the captured media sample meets the at least one criterion further comprises: detecting a sound level [amplitude of the sound/voice] of the captured media sample via a microphone [microphone] of the computing device (Cho: see [0037]); and determining that the sound level of the captured media sample is above the threshold sound level, wherein the time identifier is generated based on the determining that the sound level of the captured media sample is above the threshold sound level (Cho: see [0037] – The circuit may timestamp the digital signal generated from the sound/voice received by the microphone when the amplitude of the sound/voice exceeds a threshold as determined, for example, based on the digital signal from the ADC.).

Referring to claim 47, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the control circuitry is further configured to: identify the media asset of the captured media sample by: generating a media fingerprint based on the captured media sample (Petrou: see column 6, lines 33-60 – digital signature); identifying the media asset based on the media fingerprint (Petrou: see column 6, lines 33-60); and associating the media asset with the captured media sample (Petrou: see column 6, lines 49-60).

Referring to claim 48, Petrou/Gupta/Cho/Guiney teaches the system of claim 47, wherein the reply to the query comprises a media asset identifier of the media asset (Petrou: see column 9, lines 42-60).

Referring to claim 49, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the control circuitry is further configured to: based on determining that the captured media sample meets the at least one criterion: store the captured media sample and the time identifier for the captured media sample in memory (Petrou: see Fig 1 and column 3, lines 47-63).
Referring to claim 50, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the control circuitry is further configured to: based on determining that the captured media sample does not meet the at least one criterion: delete the captured media sample (Petrou: see column 5, lines 17-37 – The sensor data is constantly listened for and is only captured and stored if a criterion is met.).

Referring to claim 51, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the captured media sample of the media asset is captured while the at least one device outputs the captured media sample at a threshold sound level (Petrou: see column 5, lines 40-45 – A capture event trigger can be a loud sound detected; Cho: see [0032]; [0037]).

Referring to claim 52, Petrou/Gupta/Cho/Guiney teaches the system of claim 44, wherein the control circuitry is further configured to generate the reply to the query by: identifying media asset metadata corresponding to the media asset, wherein the reply includes the media asset metadata (Petrou: see column 9, lines 42-60).

Claims 37 and 46 are rejected under 35 U.S.C. 103 as being unpatentable over US Patent No. 9,137,308 to Petrou et al. (hereafter Petrou) in view of US Patent No. 10,623,403 to Gupta et al. (hereafter Gupta) in view of US PGPub 2019/0045300 to Cho et al. (hereafter Cho) in view of US PGPub 2019/0197510 to Guiney et al. (hereafter Guiney) as applied to claims 35 and 44 above, and further in view of US PGPub 2016/0247328 to Han et al. (hereafter Han).
Referring to claims 37 and 46, while Petrou/Gupta/Cho/Guiney teaches a threshold value, Petrou/Gupta/Cho/Guiney fails to explicitly teach the further limitation wherein the threshold value is a threshold media sample size, and wherein the determining that the captured media sample meets the at least one criterion further comprises: detecting a size of the captured media sample; and determining that the size of the captured media sample is above the threshold media sample size, wherein the time identifier is generated based on the determining that the size of the captured media sample is above the threshold media sample size.

Han teaches capturing media, including the further limitation wherein the threshold value is a threshold media sample size (see [0039]), and wherein the determining that the captured media sample meets the at least one criterion further comprises: detecting a size of the captured media sample (see [0039]); and determining that the size of the captured media sample is above the threshold media sample size (see [0039]), wherein the time identifier is generated based on the determining that the size of the captured media sample is above the threshold media sample size (see [0040]).

It would have been obvious to one of ordinary skill in the art prior to the effective filing date of the claimed invention to use the size criterion of Han as the criterion of Petrou/Gupta/Cho/Guiney. One would have been motivated to do so since size is merely another preference of what to capture (Petrou: see column 5, lines 38-66).

Response to Arguments

With regard to Applicant's arguments on page 7 of the Remarks that Gray and Gupta fail to teach socially linked devices, the Examiner agrees. A new reference to Guiney is being utilized to teach the limitation.
With regard to the arguments concerning Petrou on pages 7 and 8 of the Remarks, while the Examiner agrees that Petrou fails to teach that the criterion is a threshold value, Petrou does teach a criterion and does perform an analysis after the capture that determines a next step. As was pointed out previously, the term "capture" encompasses different scopes based on the manner in which the Specification of Petrou uses the term and the manner in which Applicant's Specification uses the term. It is noted that the Applicant does not provide any support from Applicant's specification for Applicant's interpretation of the claimed limitations.

Paragraph [0053] of Applicant's Published Specification states "At block 502, control circuitry 304 determines whether a media sample captured is triggered … Additionally or alternatively, control circuitry 304 may be configured to capture media assets in response to detection of another type of trigger event, such as a sound level above a certain sound pressure level (e.g., in decibels) being captured via microphone 316."

Paragraph [0054] of Applicant's Published Specification states "At block 506, control circuitry 304 begins capturing media sample 214. For example, control circuitry 304 begins to store in storage 308 an audio sample received by way of microphone 316 … At block 508, control circuitry 304 determines whether to stop capturing media sample 214. For instance, in some aspects, control circuitry 304 may be configured to capture media samples of a certain size and/or duration. The size and/or duration may vary based on media sample type, for example, to ensure a media sample size large enough to enable identification of the media sample.
Additionally, or alternatively, control circuitry 304 may be configured to capture media samples as long as a trigger condition remains detected (e.g., as long as a sound level above a certain sound pressure level (e.g., in decibels) remains continuously captured via microphone 316).” With regards to the claimed limitation “analyzing, after the capture, the captured media sample to determine whether the captured media sample meets at least one criterion, wherein the at least one criterion is a threshold value,” the Examiner is utilizing paragraphs [0053] and [0054] as providing support for these limitations. It is noted the threshold value has been incorporated from the dependent claims 36 and 37 and is based on sound level and size. The Specification does not recite the term “threshold.” These paragraphs are the only paragraphs that mention the size or sound level. Petrou states in column 5, lines 17-18 “Capture event manager 230 receives sensor data from sensor generators 224 ….” Column 5, lines 27-38 states “In some embodiments, sensor data generators 224 utilize an audio sensor to capture audio data ….” As is stated in the background of Petrou in column 1, lines 12-5 and is well known to one of ordinary skill in the art, a microphone is an example of an audio sensor. Therefore, based on the paragraphs [0053] and [0054] of Applicant’s Published Specification, Petrou is considered to be capturing a media sample in the same manner as the Applicant using a microphone and listening for a sound. 
Now with regards to the limitation “analyzing, after the capture, the captured media sample to determine whether the captured media sample meets at least one criterion,” Petrou teaches analyzing, after the capture [recording], the captured media sample to determine whether the media sample meets at least one criterion [an event to trigger media data capture has occurred (e.g., a loud or unusual sound detected, an object detected similar but different from object included in the history of user experiences, if a person, place or object is recognized via the audio or visual data)] (see column 5, line 17 – column 6, line 2; column 6, lines 3-18; see column 7, lines 53-64 – In response to receiving the sensor data, the mobile computing device analyzes the sensor data to determine if one or more media capture events has occurred. The user of the mobile computing device may define a media capture event – e.g., if a person, place or object is recognized via the audio or visual data.). As is stated several times in column 5 of Petrou, Petrou’s capture event module determines whether a media data capture event has occurred based on received sensor data. The received sensor data is considered to be the captured media sample. That received sensor data is then processed or analyzed after being received/recorded/captured. As is stated in column 5, lines 40-45, Petrou states “For example, a capture event module 230 may analyze sensor data to determine if an audio or visual event is likely to have occurred (e.g., a loud or unusual sound detected, an object detected similar but different from object included in the history of experiences).” The examples are construed as representing the criterion. Therefore, Petrou is considered to teach analyzing data after capture based on a criterion. Cho has been utilized to teach the threshold value. 
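The post-capture analysis the Examiner attributes to Petrou (sensor data is first received/recorded, then analyzed to decide whether a capture event such as a loud sound has occurred) might be sketched as follows; the threshold value and function name are hypothetical, not taken from Petrou or Cho:

```python
# Hypothetical sketch of post-capture analysis: the audio frames have
# already been captured, and are then analyzed against a criterion such
# as a loud sound. The threshold value below is illustrative only.
LOUDNESS_THRESHOLD = 0.8  # normalized amplitude

def capture_event_occurred(audio_frames):
    """Return True when any already-recorded frame exceeds the loudness
    criterion, i.e. the analysis happens after the data is captured."""
    return any(abs(a) > LOUDNESS_THRESHOLD for a in audio_frames)
```
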
With regards to the argument that Petrou does not teach “based on determining that the captured media sample meets the at least one criterion, generating a time identifier for the captured media sample,” Petrou teaches based on determining that the captured media sample meets the at least one criterion [media capture event has occurred], generating a time identifier [time data] for the captured media sample (see column 5, line 38 – column 6, line 29; column 9, lines 62-67 – In response to the capture of media and based on the preferences set within capture event manager 230, audio/visual search generator 216 prepares the captured media data for transmission to audio/visual based search system 232. In one embodiment, audio/visual search generator 216 generates digital signatures for objects within image data captured by image capture module 222, selects audio samples or generates digital signatures from audio data captured by audio capture module 220, and obtains data from sensor data generators 224, such as location data, time data, acceleration data, etc.). As was stated previously, Petrou is creating this time stamp after the recording/capture of the media sample. Therefore, while Petrou and Applicant use the term capture differently, Petrou is still teaching the claimed limitations given the broadest reasonable interpretation of the claimed limitations and in view of the specification. It is also noted that Cho also teaches some of the argued limitations. Cho teaches capturing a media sample [sound] (see [0021] – The lighting fixtures may receive voice commands or other sounds that emanated from within the room and/or voice commands or other sounds that enter the room. 
For example, each lighting fixture may include a microphone that can receive a sound.); analyzing, after capture, the captured media sample [sound] to determine whether captured media sample meets at least one criterion, wherein the at least one criterion is a threshold value [threshold amplitude] (see [0032] – Each lighting fixture may compare the amplitude of a received sound against a threshold amplitude to determine whether to record and/or transmit the sound.); based on determining that the captured media sample [sound] meets the at least one criterion [threshold], generating a time identifier [timestamp] for the captured media sample (see [0037] – The circuit may timestamp the digital signal generated from the sound/voice received by the microphone when the amplitude of the sound/voice exceeds a threshold as determined, for example, based on the digital signal from the ADC.).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant’s disclosure. US PGPub 2010/0269127 to Krug teaches capturing audio over a predetermined time period. Paragraph [0067] recites: “Turning to FIG. 3, an exemplary encoded audio signal is illustrated, where the encoded signal is used to determine audience engagement. The audio signal is shown having audio segments (300A-E), where appropriate segments have codes (301A-E) inserted in their respective segments, in accordance with any of the techniques mentioned above. Codes 301A-E should contain sufficient information to identify characteristics of the audio, such as performance and/or artist, station identification, timestamp data, etc. Audio segments (300A-E) are preferably captured on a device using such methods as passive sampling, sampled over a predetermined time period (t), such as 15 seconds.” US PGPub 2018/0122404 to Hwang et al teaches one or more mobile devices may monitor and record all audio within a range of a microphone of a device during one or more predetermined times. 
US Patent 8,265,928 to Kristijansson et al teaches selecting a subset of audio signals. US Patent No 10,038,937 – Annotating audio feeds with metadata identifying a time of capture and geo-coordinates of location. US PGPub 2013/0339436 to Gray et al teaches determining the proximity of devices, including the further limitation of detecting at least one device within a near-field communication range of a computing device (see [0102] - Proximity can be determined by wireless means--for example, using short range communications based on any short range wireless technology such as, but not limited to, BLUETOOTH.RTM., ZIGBEE.RTM., and Near Field Communication (NFC) technologies, or using WIFI, communications between devices, or by other means--for example if a smartphone 102 is connected via USB connection with a desktop personal computer 106 the two devices 102, 106 may be considered in close proximity with each other.).

Contact Information

Any inquiry concerning this communication or earlier communications from the examiner should be directed to KIMBERLY LOVEL WILSON whose telephone number is (571)272-2750. The examiner can normally be reached 8-4:30. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aleksandr Kerzhner can be reached at 571-270-1760. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. 
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /KIMBERLY L WILSON/Primary Examiner, Art Unit 2165

Prosecution Timeline

Jun 20, 2024
Application Filed
Mar 18, 2025
Non-Final Rejection — §103
Jun 20, 2025
Response Filed
Oct 09, 2025
Final Rejection — §103
Jan 13, 2026
Request for Continued Examination
Jan 25, 2026
Response after Non-Final Action
Feb 11, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12591575
IDENTIFICATION OF FEATURE GROUPS IN FEATURE GRAPH DATABASES
2y 5m to grant Granted Mar 31, 2026
Patent 12585659
SUGGESTION ENGINE FOR DATA CENTER MANAGEMENT AND MONITORING CONSOLE
2y 5m to grant Granted Mar 24, 2026
Patent 12585704
RULE-BASED SIDEBAND DATA COLLECTION IN AN INFORMATION HANDLING SYSTEM
2y 5m to grant Granted Mar 24, 2026
Patent 12579183
SYSTEMS AND METHODS FOR MAINTAINING DISTRIBUTED MEDIA CONTENT HISTORY AND PREFERENCES
2y 5m to grant Granted Mar 17, 2026
Patent 12572505
DATA QUERY METHOD AND APPARATUS, DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Mar 10, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
71%
Grant Probability
88%
With Interview (+17.6%)
3y 10m
Median Time to Grant
High
PTA Risk
Based on 547 resolved cases by this examiner. Grant probability derived from career allow rate.
