Prosecution Insights
Last updated: April 19, 2026

Application No.: 18/931,988
Title: Methods and Systems for Searching Utilizing Acoustical Context
Status: Non-Final OA (§103)
Filed: Oct 30, 2024
Examiner: PEREZ-ARROYO, RAQUEL
Art Unit: 2169
Tech Center: 2100 — Computer Architecture & Software
Assignee: The Diablo Canyon Collective LLC
OA Round: 3 (Non-Final)

Grant Probability: 58% (Moderate)
OA Rounds: 3-4
To Grant: 3y 5m
With Interview: 90%

Examiner Intelligence

Career Allow Rate: 58% (grants 58% of resolved cases; 171 granted / 296 resolved; +2.8% vs TC avg)
Interview Lift: +32.3% (strong lift in resolved cases with an interview vs. without)
Typical Timeline: 3y 5m avg prosecution; 28 currently pending
Career History: 324 total applications across all art units

Statute-Specific Performance

§101: 21.9% (-18.1% vs TC avg)
§103: 47.6% (+7.6% vs TC avg)
§102: 8.7% (-31.3% vs TC avg)
§112: 15.0% (-25.0% vs TC avg)

Tech Center averages are estimates. Based on career data from 296 resolved cases.

Office Action (§103)

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on February 6, 2026 has been entered.

Response to Amendment

This Office Action has been issued in response to Applicant's Communication of amended application S/N 18/931,988 filed on February 6, 2026. Claims 1, 3, 4, 6 to 12, 14, 15, 17, 18, and 20 to 28 are currently pending with the application.

Specification

The specification is objected to as failing to provide proper antecedent basis for the claimed subject matter. See 37 CFR 1.75(d)(1) and MPEP § 608.01(o). Correction of the following is required: claims 1, 15, 24, and 29 to 31 recite the limitation "contextual engine". The specification lacks antecedent basis for the claim terminology, and more specifically, for the term "contextual engine".

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of pre-AIA 35 U.S.C.
103(a), which forms the basis for all obviousness rejections set forth in this Office action:

(a) A patent may not be obtained though the invention is not identically disclosed or described as set forth in section 102, if the differences between the subject matter sought to be patented and the prior art are such that the subject matter as a whole would have been obvious at the time the invention was made to a person having ordinary skill in the art to which said subject matter pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 4, 6, 7, 9, 12, 15, 18, 22, and 23 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Abbott et al. (U.S. Publication No. 2001/0040590), hereinafter Abbott, and further in view of Weider et al. (U.S. Publication No. 2007/0050191), hereinafter Weider.

As to claim 1:

Abbott discloses: A device to perform a search, comprising: an environmental sensor [Paragraph 0038 teaches an environment sensor input device]; a microphone [Paragraph 0038 teaches a user input device including a microphone]; and one or more processors configured to: receive environmental information from the environmental sensor relating to a surrounding environment of the device [Paragraph 0038 teaches receiving information from the environment sensor input device; Paragraph 0046 teaches receiving information including sensed environment information; Paragraph 0050 teaches receiving and using information related to the environment surrounding the user]; receive acoustic information from an audio stream collected from the microphone [Paragraph 0038 teaches receiving information from user input devices including a microphone; Paragraph 0048 teaches receiving audio information, which is sensed information related to the user, provided by the microphone]; receive a search request from a user [Paragraph 0033 teaches receiving user questions, such as "What is my current activity?"]; modify the search request, based on contextual information determined from the environmental information and the acoustic information, to obtain a search result [Paragraph 0033 teaches combining context information such as location signals provided by a GPS with other information such as ambient noise signals and video input cues to answer more abstract context questions of the user, such that the abstract questions can be more intelligently answered, in other words, modifying the search request based on the environmental information and the acoustic information to obtain search results]; and provide the search result obtained based on the search request, modified based on the contextual information, to an external device [Paragraph 0038 teaches receiving and processing the various input information and presenting information to the user on the various accessible output devices; Paragraph 0074 teaches determining a best response to provide as an output to a user, based on input information including ambient audio attributes, visual elements, environment information, etc., including searching and retrieving the user's calendar for additional information, and providing an appropriate result or response, i.e., visually indicating the user's activity level; Paragraph 0170 teaches gathering the relevant information as specified by the layout, including a database query, etc., therefore, generating a search request or query based on the obtained information, to provide search results; Paragraph 0204 teaches providing information to users such as in response to requests or automatically].

Abbott does not appear to expressly disclose a search result from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result.

Weider discloses: a search result from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result [Paragraph 0028 teaches performing required formatting, variable substitutions, and transformations to modify the queries; Paragraph 0197 teaches that once context and criteria are determined, the question is formed by the parser by filling in required tokens for the grammar of the context, and transformations or substitutions of terms are performed based on the contextual information; Paragraph 0200 teaches generating queries to one or more local or external information sources based on the question, context, and parameters or criteria; Paragraph 0201 teaches sending the queries to local or network information sources; Paragraph 0203 teaches obtaining search results].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to incorporate search results from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result, as taught by Weider [Paragraphs 0028, 0197, 0200-0203], since both applications are related to improvements to the user's experience and both have the ability to enhance a user's query based on context or additional information in order to obtain better results; having the ability to obtain search results from a search engine using modified queries including contextual information improves the results provided to the user (See Weider Para [0031]).
As to claim 4:

Abbott discloses: the microphone is an external microphone of an earpiece utilized to sample an external sound field in the surrounding environment [Paragraph 0047 teaches a head-mounted microphone, therefore, a microphone external of an earpiece; Paragraph 0050 teaches the microphone can provide sensed information related to the environment surrounding the user; Paragraph 0073 teaches receiving an ambient audio attribute].

As to claim 6:

Abbott discloses: the environmental information includes an image or video of the surrounding environment [Paragraph 0038 teaches receiving information from environmental sensor devices including a video camera; Paragraph 0058 teaches receiving input from the microphone and video camera; Paragraph 0060 teaches video input may sample and process frames of video data provided by the video camera; Paragraph 0073 teaches receiving a current video attribute].

As to claim 7:

Abbott discloses: the search request is refined based on a history of a user [Paragraph 0098 teaches attributes can include user historical data; Paragraph 0163 teaches gathering content and adjusting the presentation for the user based on user preference or customization data; Paragraph 0154 teaches changing the information based on information about the user].

As to claim 9:

Abbott discloses: the environmental information is obtained during a predetermined time leading up to the search request [Paragraph 0082 teaches requesting attributes (which include the environmental and surrounding attributes), where the request includes the name of the attribute being requested and a timeout period].

As to claim 12:

Abbott discloses: responsive to the search result, collect a further audio stream [Paragraph 0077 teaches further analyzing the ambient audio input attribute to determine whether voices or sounds of nearby people are present, hence, after determining that the user is in a serious condition (search result), collecting a further audio stream].
As to claim 15:

Abbott discloses: A method for performing a search, comprising: receiving environmental information from an environmental sensor of a device relating to a surrounding environment of the device [Paragraph 0038 teaches receiving information from the environment sensor input device; Paragraph 0046 teaches receiving information including sensed environment information; Paragraph 0050 teaches receiving and using information related to the environment surrounding the user]; receiving acoustic information from an audio stream collected from a microphone of the device [Paragraph 0038 teaches receiving information from user input devices including a microphone; Paragraph 0048 teaches receiving audio information, which is sensed information related to the user, provided by the microphone]; receiving a search request from a user [Paragraph 0033 teaches receiving user questions, such as "What is my current activity?"]; modifying the search request, based on contextual information determined from the environmental information and the acoustic information, to obtain a search result [Paragraph 0033 teaches combining context information such as location signals provided by a GPS with other information such as ambient noise signals and video input cues to answer more abstract context questions of the user, such that the abstract questions can be more intelligently answered, in other words, modifying the search request based on the environmental information and the acoustic information to obtain search results]; and providing the search result obtained based on the search request, modified based on the contextual information, to an external device [Paragraph 0038 teaches receiving and processing the various input information and presenting information to the user on the various accessible output devices; Paragraph 0074 teaches determining a best response to provide as an output to a user, based on input information including ambient audio attributes, visual elements, environment information, etc., including searching and retrieving the user's calendar for additional information, and providing an appropriate result or response, i.e., visually indicating the user's activity level; Paragraph 0170 teaches gathering the relevant information as specified by the layout, including a database query, etc., therefore, generating a search request or query based on the obtained information, to provide search results; Paragraph 0204 teaches providing information to users such as in response to requests or automatically].

Abbott does not appear to expressly disclose a search result from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result.

Weider discloses: a search result from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result [Paragraph 0028 teaches performing required formatting, variable substitutions, and transformations to modify the queries; Paragraph 0197 teaches that once context and criteria are determined, the question is formed by the parser by filling in required tokens for the grammar of the context, and transformations or substitutions of terms are performed based on the contextual information; Paragraph 0200 teaches generating queries to one or more local or external information sources based on the question, context, and parameters or criteria; Paragraph 0201 teaches sending the queries to local or network information sources; Paragraph 0203 teaches obtaining search results].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to incorporate search results from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result, as taught by Weider [Paragraphs 0028, 0197, 0200-0203], since both applications are related to improvements to the user's experience and both have the ability to enhance a user's query based on context or additional information in order to obtain better results; having the ability to obtain search results from a search engine using modified queries including contextual information improves the results provided to the user (See Weider Para [0031]).

As to claim 18:

Abbott discloses: collecting a further audio stream responsive to the search result [Paragraph 0077 teaches further analyzing the ambient audio input attribute to determine whether voices or sounds of nearby people are present, hence, after determining that the user is in a serious condition (search result), collecting a further audio stream].

As to claim 22:

Abbott discloses: the device is an earpiece worn by a user, and wherein the external device presents the search result to the user [Paragraph 0040 teaches supplying output information to the display; Paragraph 0069 teaches displaying information to the user; Paragraph 0075 teaches earpiece speaker 132; Paragraph 0093 teaches an earpiece speaker, and providing a visual display of information on the display 134].
As to claim 23:

Abbott discloses: activating a camera to capture an image or video of the surrounding environment responsive to the search result [Paragraph 0074 teaches further analyzing visual elements within a frame of video data or other video input attributes after determining that the user's current cardiac condition or state shows elevated heart rate attributes].

Claim 3 is rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Abbott et al. (U.S. Publication No. 2001/0040590), hereinafter Abbott, in view of Weider et al. (U.S. Publication No. 2007/0050191), hereinafter Weider, and further in view of Macours (U.S. Publication No. 2011/0150248).

As to claim 3:

Abbott discloses all the limitations as set forth in the rejection of claim 1 above, but does not appear to expressly disclose that the microphone is an internal microphone of an earpiece to be located in an ear canal.

Macours discloses: the microphone is an internal microphone of an earpiece to be located in an ear canal [Paragraph 0024 teaches each ear unit includes an internal microphone inside of the enclosure; Paragraph 0043 teaches earpieces can be earphones such as in-ear canal earpieces, where the internal microphone may be provided on the inside of the ear unit in the user's inner ear].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to incorporate a microphone that is an internal microphone of an earpiece to be located in an ear canal, as taught by Macours [Paragraphs 0024, 0043], since both applications are related to improvements to the user's experience with wearable devices; having the microphone located internally in the device is a simple substitution of one known element for another to obtain predictable results, that is, the audio stream will be collected from the microphone regardless of the microphone being located internally or externally of an earpiece, as recited in the claims as presently presented.

Claims 8, 11, 17, 24, and 25 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Abbott et al. (U.S. Publication No. 2001/0040590), hereinafter Abbott, in view of Weider et al. (U.S. Publication No. 2007/0050191), hereinafter Weider, and further in view of Ainslie et al. (U.S. Patent No. 8,326,861), hereinafter Ainslie.

As to claim 8:

Abbott discloses all the limitations as set forth in the rejection of claim 1 above, but does not appear to expressly disclose that the device enables an opt-in for data acquisition.

Ainslie discloses: the device enables an opt-in for data acquisition [Column 6, lines 56 to 60 teach the user must opt in for tracking of the user history data].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to enable an opt-in for data acquisition as taught by Ainslie [Column 6], since both applications are related to improvements to the user's experience with information processing; providing an opt-in to the user enables the protection of the user's privacy, thereby improving the user's experience (See Ainslie Columns [5-6]).
As to claim 11:

Abbott discloses all the limitations as set forth in the rejection of claim 1 above, but does not appear to expressly disclose providing a summary of activities including search information generated during a day.

Ainslie discloses: provide a summary of activities including search information generated during a day [Column 6, lines 39 to 52 teaches storing information associated with the user devices, queries submitted, time of query submissions, search results retrieved and displayed, and tracked user's web browsing activities, etc., and using it to build and update a personal profile tree, therefore, including providing a summary of activities including search information generated during a day].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to provide a summary of activities including search information generated during a day as taught by Ainslie [Column 6], since both applications are related to improvements to the user's experience with information processing; providing a summary of activities including search information generated during a day enables identification of trends and terms that are more likely to reflect the user's interests, thereby improving the quality and relevancy of search results (See Ainslie Column [3, lines 25 - 31]).

As to claim 17:

Abbott discloses all the limitations as set forth in the rejection of claim 15 above, but does not appear to expressly disclose providing a summary of a day including search information generated during a day based on the search result.

Ainslie discloses: providing a summary of a day including search information generated during a day based on the search result [Column 6, lines 39 to 52 teaches storing information associated with the user devices, queries submitted, time of query submissions, search results retrieved and displayed, and tracked user's web browsing activities, etc., and using it to build and update a personal profile tree, therefore, including providing a summary of activities including search information generated during a day based on the search result].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to provide a summary of a day including search information generated during a day based on the search result as taught by Ainslie [Column 6], since both applications are related to improvements to the user's experience with information processing; providing a summary of activities including search information generated during a day enables identification of trends and terms that are more likely to reflect the user's interests, thereby improving the quality and relevancy of search results (See Ainslie Column [3, lines 25 - 31]).
As to claim 24:

Abbott discloses: A method for performing a search, comprising: receiving environmental information from an environmental sensor of a device relating to a surrounding environment of the device [Paragraph 0038 teaches an environment sensor input device and receiving information from it; Paragraph 0046 teaches receiving information including sensed environment information; Paragraph 0050 teaches receiving and using information related to the environment surrounding the user]; receive acoustic information from an audio stream collected from a microphone of the device [Paragraph 0038 teaches receiving information from user input devices including a microphone; Paragraph 0048 teaches receiving audio information, which is sensed information related to the user, provided by the microphone]; receiving a search request from a user [Paragraph 0033 teaches receiving user questions, such as "What is my current activity?"]; modifying the search request, based on contextual information determined from the environmental information and the acoustic information, to obtain a search result [Paragraph 0033 teaches combining context information such as location signals provided by a GPS with other information such as ambient noise signals and video input cues to answer more abstract context questions of the user, such that the abstract questions can be more intelligently answered, in other words, modifying the search request based on the environmental information and the acoustic information to obtain search results]; and provide a summary of activities based on the search request, as modified based on the contextual information [Paragraph 0038 teaches receiving and processing the various input information and presenting information to the user on the various accessible output devices; Paragraph 0074 teaches determining a best response to provide as an output to a user, based on input information including ambient audio attributes, visual elements, environment information, etc., including searching and retrieving the user's calendar for additional information, and providing an appropriate result or response, i.e., visually indicating the user's activity level; Paragraph 0170 teaches gathering the relevant information as specified by the layout, including as a database query, etc., therefore, generating a search request or query based on the obtained information, to provide search results; Paragraph 0186 teaches displaying a variety of activity-related contextual information; Paragraph 0204 teaches providing information to users such as in response to requests or automatically].

Abbott does not appear to expressly disclose a search result from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result; a summary of activities including search information.

Weider discloses: a search result from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result [Paragraph 0028 teaches performing required formatting, variable substitutions, and transformations to modify the queries; Paragraph 0197 teaches that once context and criteria are determined, the question is formed by the parser by filling in required tokens for the grammar of the context, and transformations or substitutions of terms are performed based on the contextual information; Paragraph 0200 teaches generating queries to one or more local or external information sources based on the question, context, and parameters or criteria; Paragraph 0201 teaches sending the queries to local or network information sources; Paragraph 0203 teaches obtaining search results].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to incorporate search results from a contextual engine, wherein the search request includes a search term modified by the contextual information to cause the contextual engine to search the search term using the contextual information to produce the search result, as taught by Weider [Paragraphs 0028, 0197, 0200-0203], since both applications are related to improvements to the user's experience and both have the ability to enhance a user's query based on context or additional information in order to obtain better results; having the ability to obtain search results from a search engine using modified queries including contextual information improves the results provided to the user (See Weider Para [0031]).

Neither Abbott nor Weider appear to expressly disclose a summary of activities including search information.

Ainslie discloses: a summary of activities including search information generated based on the search request [Column 6, lines 39 to 52 teaches storing information associated with the user devices, queries submitted, time of query submissions, search results retrieved and displayed, and tracked user's web browsing activities, etc., and using it to build and update a personal profile tree, therefore, including providing a summary of activities including search information generated during a day].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to include a summary of activities including search information generated during a day as taught by Ainslie [Columns 4, 6], since both applications are related to improvements to the user's experience with information processing; including a summary of activities like search information generated during a day enables identification of trends and terms that are more likely to reflect the user's interests, thereby improving the quality and relevancy of search results (See Ainslie Column [3, lines 25 - 31]).

As to claim 25:

Abbott as modified by Ainslie discloses: the summary organizes search information from a current day, a past week, or a past month [Column 6, lines 39 to 52 teaches storing information associated with the user devices, queries submitted, time of query submissions, search results retrieved and displayed, and tracked user's web browsing activities, etc.; Column 8, lines 53-54 teach search history and web browsing history accumulate over time, in other words, the history includes search information from a current day, a past week, or a past month].

Claims 10 and 21 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Abbott et al. (U.S. Publication No. 2001/0040590), hereinafter Abbott, in view of Weider et al. (U.S. Publication No. 2007/0050191), hereinafter Weider, and further in view of Yang (U.S. Publication No. 2010/0105364).

As to claim 10:

Abbott discloses all the limitations as set forth in the rejection of claim 1 above, but does not appear to expressly disclose that the search result is obtained in response to a search request trigger comprising a predetermined key word.
Yang discloses: the search result is obtained in response to a search request trigger comprising a predetermined key word [Paragraph 0144 teaches the user may input a preset word or command together with a search word in order to initiate a search operation, where the search operation can be performed based on a preset voice command "Search"].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to obtain the search result in response to a search request trigger comprising a predetermined key word as taught by Yang [Paragraph 0144], since both applications are related to improvements to the user's experience with information processing; enabling a search operation to be performed upon a search request trigger allows users to operate the devices without needing a high level of skill, thereby improving the user's experience (See Yang Paragraph [0118]).

As to claim 21:

Abbott discloses all the limitations as set forth in the rejection of claim 15 above, but does not appear to expressly disclose that the search request is based on one or more key words detected in the audio stream.

Yang discloses: the search request is based on one or more key words detected in the audio stream [Paragraph 0144 teaches the user may input a preset word or command together with a search word in order to initiate a search operation, where the search operation can be performed based on a preset voice command "Search"].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to obtain the search request based on one or more key words detected in the audio stream as taught by Yang [Paragraph 0144], since both applications are related to improvements to the user's experience with information processing; enabling a search operation to be performed upon a search request trigger allows users to operate the devices without needing a high level of skill, thereby improving the user's experience (See Yang Paragraph [0118]).

Claims 14 and 20 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Abbott et al. (U.S. Publication No. 2001/0040590), hereinafter Abbott, in view of Weider et al. (U.S. Publication No. 2007/0050191), hereinafter Weider, and further in view of Mikan et al. (U.S. Publication No. 2010/0158213), hereinafter Mikan.

As to claim 14:

Abbott discloses all the limitations as set forth in the rejection of claim 1 above, but does not appear to expressly disclose highlighting a transcription of the audio stream based on a voice activation.

Mikan discloses: highlight a transcription of the audio stream based on a voice activation [Paragraph 0035 teaches detecting specific words or commands and taking actions based on the detection, including marking a transcript, such as highlighting the text; Paragraph 0036 teaches detecting certain words within the audio communication and, responsive to detecting the certain words, inserting an indicator into the transcript or highlighting the transcript].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to highlight a transcription of the audio stream based on a voice activation, as taught by Mikan [Paragraphs 0035, 0036], since both applications are related to improving the user's experience with processing information; highlighting a transcription of the audio improves the user's experience by making it easier to identify information of interest to the user (see Mikan Paragraphs [0035], [0046]).

As to claim 20: Abbott discloses all the limitations as set forth in the rejection of claim 15 above, but does not appear to expressly disclose highlighting a transcription of the audio stream based on a voice activation from a user.

Mikan discloses: highlighting a transcription of the audio stream based on a voice activation from a user [Paragraph 0035 teaches detecting specific words or commands and taking actions based on the detection, including marking a transcript, such as highlighting the text; Paragraph 0036 teaches detecting certain words within the audio communication and, responsive to detecting the certain words, inserting an indicator into the transcript or highlighting the transcript].

It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott to highlight a transcription of the audio stream based on a voice activation from a user, as taught by Mikan [Paragraphs 0035, 0036], since both applications are related to improving the user's experience with processing information; highlighting a transcription of the audio improves the user's experience by making it easier to identify information of interest to the user (see Mikan Paragraphs [0035], [0046]).

Claims 29 to 31 are rejected under pre-AIA 35 U.S.C. 103(a) as being unpatentable over Abbott et al. (U.S. Publication No. 2001/0040590), hereinafter Abbott, in view of Weider et al. (U.S. Publication No. 2007/0050191), hereinafter Weider, and further in view of Master et al. (U.S. Publication No. 2013/0254422), hereinafter Master.

As to claim 29: Abbott as modified by Weider discloses: the contextual engine includes 1) a classifier to classify speech sounds and non-speech, 2) a data analyzer to determine the contextual information from the environmental information and the acoustic information, and 3) a search engine to perform the search to produce the search result [Paragraph 0028 teaches determining domain and context and executing the queries or commands in one or more local or network data sources, hence a search engine to perform searches; Paragraph 0122 teaches classifying and searching speech and non-speech annotations; Paragraph 0016 teaches using location as part of the context for the questions asked, hence determining context from environmental information; Paragraph 0020 teaches determining context information from an utterance using automatic speech recognition].

Neither Abbott nor Weider appears to expressly disclose classifying speech sounds and non-speech sounds in the acoustic information.

Master discloses: classifying speech sounds and non-speech sounds in the acoustic information [Paragraph 0063 teaches classifying sounds, including classifying voice, music, etc.; Paragraph 0066 teaches classifying a user's query, sounds, music, and speech].
It would have been obvious to one of ordinary skill in the art at the time the invention was made to combine the teachings of the cited references and modify the teachings of Abbott by classifying speech sounds and non-speech sounds in the acoustic information, as taught by Master [Paragraphs 0063, 0066], since both applications are related to improving the user's experience with information processing; classifying words and non-speech sounds provides the user with a superior user experience (see Master Paragraph [0017]).

As to claim 30: Abbott as modified by Weider discloses: the one or more processors are configured to run the contextual engine on the device [Paragraph 0013 teaches the software may be installed in the mobile device; Paragraph 0028 teaches executing the queries or commands in one or more local data sources].

As to claim 31: Abbott as modified by Weider discloses: the one or more processors are configured to access the contextual engine via a server [Paragraph 0013 teaches the software may be installed in a server; Paragraph 0028 teaches executing the queries or commands in one or more network data sources].

Response to Arguments

The following is in response to arguments filed on February 6, 2026. Applicant's arguments have been carefully and respectfully considered, but are moot in view of the new grounds of rejection necessitated by the amendments.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to RAQUEL PEREZ-ARROYO, whose telephone number is (571) 272-8969. The examiner can normally be reached Monday to Friday, 8:00 am to 5:30 pm, alternate Fridays, EST. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Sherief Badawi, can be reached at 571-272-9782. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/RAQUEL PEREZ-ARROYO/
Primary Examiner, Art Unit 2169
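As a reading aid for the claim 29 rejection, the recited three-part contextual engine (classifier for speech vs. non-speech, data analyzer combining environmental and acoustic information, search engine producing the result) can be sketched as one pipeline. Every interface, name, and data value below is invented for illustration; nothing here comes from the application or the cited references beyond the three recited roles.

```python
from dataclasses import dataclass

@dataclass
class ContextualEngine:
    # Stand-in for the "local or network data sources" of Abbott/Weider ¶0028:
    # maps (query, location) pairs to results.
    corpus: dict

    def classify(self, sound):
        # Classifier role: label acoustic input as speech or non-speech
        # (cf. the mapping to Master ¶0063).
        return "speech" if sound.get("has_voice") else "non-speech"

    def analyze(self, environmental, acoustic):
        # Data analyzer role: derive contextual information from environmental
        # information (e.g., location) plus the acoustic classification.
        return {"location": environmental.get("location"),
                "sound_class": self.classify(acoustic)}

    def search(self, query, context):
        # Search engine role: produce a search result using the context.
        return self.corpus.get((query, context["location"]), "no result")

engine = ContextualEngine(corpus={("coffee", "Seattle"): "Cafe A"})
ctx = engine.analyze({"location": "Seattle"}, {"has_voice": True})
print(ctx["sound_class"])           # -> speech
print(engine.search("coffee", ctx))  # -> Cafe A
```

Claims 30 and 31 then differ only in where this object runs: instantiated on the device (local data sources) or behind a server (network data sources).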

Prosecution Timeline

Oct 30, 2024 - Application Filed
Jan 27, 2025 - Response after Non-Final Action
Jun 11, 2025 - Non-Final Rejection (§103)
Aug 27, 2025 - Applicant Interview (Telephonic)
Aug 27, 2025 - Examiner Interview Summary
Aug 29, 2025 - Response Filed
Dec 23, 2025 - Final Rejection (§103)
Feb 06, 2026 - Request for Continued Examination
Feb 19, 2026 - Response after Non-Final Action
Mar 07, 2026 - Non-Final Rejection (§103, current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566786 - NATURAL LANGUAGE PROCESSING WORKFLOW FOR RESPONDING TO CLIENT QUERIES
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566726 - ENABLING EXCLUSION OF ASSETS IN IMAGE BACKUPS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12555109 - DETERMINISTIC CONCURRENCY CONTROL FOR PRIVATE BLOCKCHAINS
Granted Feb 17, 2026 (2y 5m to grant)
Patent 12547602 - LOG ENTRY REPRESENTATION OF DATABASE CATALOG
Granted Feb 10, 2026 (2y 5m to grant)
Patent 12517948 - INFORMATION PROCESSING METHOD AND DEVICE FOR SORTING MUSIC IN A PLAYLIST
Granted Jan 06, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 58%
With Interview: 90% (+32.3%)
Median Time to Grant: 3y 5m
PTA Risk: High
Based on 296 resolved cases by this examiner. Grant probability derived from career allow rate.
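The panel's figures are mutually consistent under simple arithmetic: the 90% with-interview probability is the 58% career allow rate plus the 32.3-point interview lift. A sketch of that reconstruction (assuming the tool simply adds the lift to the base rate; the actual model behind these numbers is not disclosed on the page):

```python
# Inputs shown elsewhere on the page.
granted, resolved = 171, 296
career_allow_rate = granted / resolved   # ~0.578
interview_lift = 0.323                   # percentage-point lift with an interview

base = round(career_allow_rate * 100)                           # -> 58
with_interview = round((career_allow_rate + interview_lift) * 100)  # -> 90
print(base, with_interview)
```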
