Prosecution Insights
Last updated: April 19, 2026
Application No. 18/587,455

COLLABORATIVE SEARCH SESSIONS THROUGH AN AUTOMATED ASSISTANT

Final Rejection — §103, §112
Filed: Feb 26, 2024
Examiner: MINA, FATIMA P
Art Unit: 2159
Tech Center: 2100 — Computer Architecture & Software
Assignee: Google LLC
OA Round: 4 (Final)
Grant Probability: 64% (Moderate)
Predicted OA Rounds: 5-6
Predicted Time to Grant: 4y 2m
Grant Probability With Interview: 90%

Examiner Intelligence

Career Allow Rate: 64% — grants 64% of resolved cases (259 granted / 402 resolved; +9.4% vs TC avg)
Interview Lift: +25.6% allowance-rate lift for resolved cases with an interview (strong)
Typical Timeline: 4y 2m average prosecution; 27 applications currently pending
Career History: 429 total applications across all art units

Statute-Specific Performance

§101: 19.7% (-20.3% vs TC avg)
§103: 52.6% (+12.6% vs TC avg)
§102: 12.6% (-27.4% vs TC avg)
§112:  8.1% (-31.9% vs TC avg)

Based on career data from 402 resolved cases; TC averages are Tech Center estimates.
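The headline examiner figures above are simple ratios. A minimal Python sketch of how they are derived — only the 259 granted / 402 resolved counts come from the report; the function names are illustrative, not part of any real analytics tool:

```python
# Minimal sketch of how the headline examiner statistics are derived.
# Only the 259 granted / 402 resolved counts come from the report above;
# the function names are illustrative placeholders.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allowance rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

def lift(rate_with: float, rate_without: float) -> float:
    """Percentage-point difference between two allowance rates."""
    return rate_with - rate_without

career = allow_rate(259, 402)
print(f"Career allow rate: {career:.1f}%")   # 259/402 -> 64.4%, shown as 64%
print(f"Lift example: {lift(90.0, career):+.1f} points")
```

Note that 259/402 rounds to the 64% shown in the dashboard; the lift shown there is computed against the with-interview cohort rather than this single-number example.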

Office Action

Grounds of rejection: §103, §112
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA. Applicant has added new claims 22 and 23 on 12/05/2025.

Response to Arguments

Applicant's arguments, see remarks filed on 12/05/2025, with respect to the rejection(s) of claim(s) 1, 2, 4-8, 10-14, and 16-23 under 35 U.S.C. 103 have been fully considered and are persuasive. Therefore, the rejection has been withdrawn. However, upon further consideration, a new ground(s) of rejection is made in view of Rosenberg et al. (US 2018/0316893).

Claim Rejections - 35 USC § 112

The following is a quotation of the first paragraph of 35 U.S.C. 112(a):

(a) IN GENERAL.—The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor or joint inventor of carrying out the invention.

The following is a quotation of the first paragraph of pre-AIA 35 U.S.C. 112:

The specification shall contain a written description of the invention, and of the manner and process of making and using it, in such full, clear, concise, and exact terms as to enable any person skilled in the art to which it pertains, or with which it is most nearly connected, to make and use the same, and shall set forth the best mode contemplated by the inventor of carrying out his invention.

Claim 22 is rejected under 35 U.S.C. 112(a) or 35 U.S.C. 112 (pre-AIA), first paragraph, as failing to comply with the written description requirement.
The claim contains subject matter which was not described in the specification in such a way as to reasonably convey to one skilled in the relevant art that the inventor or a joint inventor, or for applications subject to pre-AIA 35 U.S.C. 112, the inventor(s), at the time the application was filed, had possession of the claimed invention.

Claim 22 recites "wherein automatically adding the second user to the query session is further based on the environmental context indicating that the first query is not provided in a private setting," which is not supported by the specification. See instant specification paragraph [0052, in a case in which the first query is provided by the first user after the user enters a "private" or "incognito" mode (e.g., by saying, "let's make this private"), the automated assistant client 120 executing on the client device 110 may determine that the first query is not relevant to the second user] and paragraph [0058, at block 225, the system provides, by the first automated assistant, to the first user, a selectable option to allow the second user of the first client device to join the query session. In implementations, at block 225, in response to determining, at block 215, that the first query is relevant to a second user of the client device 110, the automated assistant client 120 executing on the client device 110 may provide, to the first user, a selectable option to allow the second user of the client device 110 to join the query session. In some implementations, the automated assistant client 120 may provide the selectable option, e.g., by visually rendering the selectable option (e.g., "Do you want to allow User 2 to participate in the search?") on a user interface of the client device 110, and/or by audibly rendering the selectable option on the client device 110].
Paragraph [0058] explicitly describes that, in response to determining that the query is relevant (determining that the query is not in a private setting, per paragraph [0052]), the system provides the first user with a selectable option to let the second user join, which is not an automatic action; the first user manually adds the second user. Therefore, the claim lacks support.

Claim 23 recites "wherein automatically adding the second user to the query session is further based on the predicted interest level of the second user in the query session satisfying a threshold," which is not supported by the specification. See instant specification paragraph [0050, The automated assistant client 120 may further base the determining whether or not the first query is relevant to the second user on the predicted interest level of the second user in the query session satisfying a threshold. In some implementations, the automated assistant client 120 may determine the predicted interest level of the second user in the query session based on a query history of the second user] and paragraph [0058, at block 225, the system provides, by the first automated assistant, to the first user, a selectable option to allow the second user of the first client device to join the query session. In implementations, at block 225, in response to determining, at block 215, that the first query is relevant to a second user of the client device 110, the automated assistant client 120 executing on the client device 110 may provide, to the first user, a selectable option to allow the second user of the client device 110 to join the query session. In some implementations, the automated assistant client 120 may provide the selectable option, e.g., by visually rendering the selectable option (e.g., "Do you want to allow User 2 to participate in the search?") on a user interface of the client device 110, and/or by audibly rendering the selectable option on the client device 110].
Paragraph [0058] explicitly describes that, in response to determining that the query is relevant (determining that the predicted interest level satisfies a threshold, per paragraph [0050]), the system provides the first user with a selectable option to let the second user join, which is not an automatic action; the first user manually adds the second user. The claim, by contrast, requires automatically adding users based on the predicted interest level satisfying a threshold. Therefore, the claim lacks support.

The following is a quotation of 35 U.S.C. 112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claim 22 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 22 recites "wherein automatically adding the second user to the query session is further based on the environmental context indicating that the first query is not provided in a private setting," which lacks support in the specification; therefore, it is not clear what is meant by the claim. For the purpose of examination, it is interpreted as adding the user based on an environmental signal that the query is not provided in a private setting.

Claim 23 is rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claim 23 recites "wherein automatically adding the second user to the query session is further based on the predicted interest level of the second user in the query session satisfying a threshold," which is not supported by the specification; therefore, it is not clear what is meant by the claim. For the purpose of examination, it is interpreted as adding the second user based on the predicted interest level of a user satisfying a threshold.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 1, 5-7, 11-13, and 17-21 is/are rejected under 35 U.S.C. 103 as being unpatentable over Juneja et al. (US 2023/0186941) in view of Rosenberg et al. (US 2018/0316893), further in view of Pearcy (US 2013/0173569), and further in view of Kanani et al. (US 2022/0197899).
With respect to claim 1, Juneja teaches a method implemented by one or more processors, the method comprising ([0066, different processors (e.g., an Intel Core i5 processor and an Intel Core i7 processor)]; examiner's note: processors):

receiving, from a first user of a first client device, by a first automated assistant executing on the first client device, a first query in a query session ([0048, Device 101 captures each request from first user 110 and second user 120, first user 110 may speak wake word 112 ("Hey Assistant, . . . ") to activate the virtual assistant on device 101], [0060, What's the weather look like this weekend in Ocean City?" before identifying which Ocean City], [0109, asking a virtual assistant]; examiner's note: the first user inputs a query and starts a query session, and the assistant is an automated assistant executing the query on the user device), wherein the first client device is an automated assistant device, and wherein the first user is located in a physical environment with the first client device when the first user provides the first query ([0002, When a user provides a voice query search as a voice input stream, e.g., in the presence of one or more other people in proximity to the input microphone, there is a chance that the one or more other persons may be speaking (e.g., input) during the input stream of the voice query], [0054, Device 101 captures each request from first user 160 and second user 170. One or more of wake word 162, request 164, and supplemental request 172 may be captured as an input stream, e.g., to be processed by a virtual assistant]; examiner's note: the user is within proximity of the client device when providing the first query);

determining an identity of the first user based on a voice of the first user detected by a microphone of the first client device or based on a face of the first user detected by a camera of the first client device ([0003, Voice identification may use voice fingerprinting, e.g., a mathematical expression of a person's voice or vocal tract, to identify a user making a voice query]; [0021, the virtual assistant may identify that the third voice is actually the first voice and the corresponding queries should be combined]; examiner's note: determining an identity of the first user based on a voice of the first user, the voice being detected by a microphone);

providing, by the first automated assistant, to the first user, a first set of search results for the first query ([0017, in response to determining the relevance score fails to meet or exceed the predetermined threshold, the virtual assistant may label the secondary query as an interruption of the input stream, and provide the first results]; [0123, the top result may be read aloud by the virtual assistant. In some embodiments, one or more of the search results may be provided via an interface for the virtual assistant and/or another connected device]; examiner's note: the automated assistant displays the first search results to the first user), wherein providing the first set of search results comprises the first automated assistant causing the first set of search results to be provided on a display of the mobile device of the first user ([0046, Device 101 may be any computing device providing a user interface, such as a voice assistant, a virtual assistant, and/or a voice interface allowing for voice-based communication with a user and/or via an electronic content display system for a user. Examples of such computing devices are a smart home assistant similar to a Google Home® device or an Amazon® Alexa® or Echo® device, a smartphone or laptop computer with a voice interface application for receiving and broadcasting information in voice format]; examiner's note: displaying search results on users' mobile devices includes the first and second users' mobile devices);

receiving, by the first automated assistant, from a second user of the first client device, prior to the second user being added to the query session, additional input to refine the first query ([0060, supplemental request 192 may be captured as an input stream, e.g., to be processed by a virtual assistant, second user 190 offers a supplemental request 192, saying, " . . . New Jersey."]; examiner's note: the second user inputs a second query before being added to the query session, and the additional input from the second user is received by the automated assistant), wherein the second user is located in the physical environment with the first client device when the second user provides the first query ([0007, a second person present in a same room, perhaps a little farther away from the microphone may speak the word "car."], [0136, the voice engine may compare loudness and/or amplitude to determine if the first voice input and the second voice input came from a similar distance from the microphone prior to analyzing other voice traits]; examiner's note: the second user is located in the physical environment with the first client device when providing input);

in response to receiving, from the second user of the first client device, the additional input to refine the first query ([0057, In scenario 150, device 101 makes listen decision 174, e.g., to accept supplemental request 172. Listen decision 174 depicts a determination to listen to supplemental request 172 from second user 170. In scenario 150, device 101 issues virtual assistant response 176, saying, "OK. Now playing "Jump" by Van Halen," and begins to playback the song, also demonstrating that supplemental request 172 was incorporated]; examiner's note: the supplemental request by the second user refines the first query):

determining an identity of the second user based on a voice of the second user detected by the microphone of the first client device or based on a face of the second user detected by the camera of the first client device (fig. 8A, [0002, in the presence of one or more other people in proximity to the input microphone], [0003, many voice assistants may identify a user interacting with them via voice identification using voice profiles. Voice identification may use voice fingerprinting, e.g., a mathematical expression of a person's voice or vocal tract, to identify a user making a voice query.], [0009, a profile ID of the person conducting the search is used by the automatic speech recognition module in order to determine which words to pass to the NLP algorithm], [0061, combining and/or setting aside voice inputs for a voice query based on identifying voices.], [0050, the virtual assistant of device 101 may identify that the voice input(s) by first user 110 and second user 120 are not from the same source.]; examiner's note: identifying users' voices (including the first and second users) detected by a microphone);

generating, based on the additional input received from the second user, a modified set of search results ([0124, the voice engine provides the new search result(s) based on the first query and the supplement]; examiner's note: generating a new search result (modified search result) based on the input of the second user); and

providing, by the first automated assistant, to the first user, the modified set of search results ([0113, one or more of the new search results may be provided via an interface for the virtual assistant and/or another connected device]; examiner's note: displaying results to the users), wherein providing the modified set of search results comprises the first automated assistant causing the modified set of search results to be provided on the display of the mobile device of the first user.

Juneja does not explicitly teach: identifying a mobile device of the first user based on the identity of the first user; subsequent to providing, to the first user, the first set of search results for the first query, receiving the additional input; identifying a mobile device of the second user based on the identity of the second user; or automatically adding the second user to the query session.

However, Rosenberg teaches identifying a mobile device of the first user based on the identity of the first user and identifying a mobile device of the second user based on the identity of the second user ([0017, For example the collaboration service can be aware of the video conferencing endpoint in the conference room, and can learn the identifiers of any portable devices present in the conference room. The collaboration service can also learn of identities (e.g., user identities, user names, account identities, etc.)]; examiner's note: identifying mobile devices by the identifiers of users); and automatically adding the second user to the query session based on the determined identity of the second user and without receiving a separate request from the second user to join the query session, by adding the mobile device of the second user to the query session ([0051, with conference room device 134 in the conference room 130 portable devices 142 automatically join to the conference without any interaction from operators of portable devices 142. In some embodiments, the automatically joining portable devices to the conference can occur at any time during the conference. Thus, a conference attendee joining the conference late will be automatically joined into the conference as soon as the attendee's device completes pairing with the conference room device], fig. 2, [0037, In addition to automatically joining (308) portable devices 142 into the conference, collaboration service 120 can add (310) the identities associated with the portable devices 142 to a conference roster of the conference]; examiner's note: automatically joining users (second users) in the conference room without an additional request from the second user, by verifying the identity of the users and adding the second users' mobile devices to the collaborative sessions (query session); additional input is taught by Juneja in para. [0011, where the first person utters "what's the name of the movie that has Michelle Pfeiffer" and a second user completes the search query by uttering "and Tony Montana."]).

One of ordinary skill in the art would recognize that incorporating the features of Rosenberg, i.e., identifying users' mobile devices and automatically joining users to the query session without any request, into Juneja would allow the system to identify and add users without a request. Juneja and Rosenberg are analogous art because both teach collaborative sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Rosenberg into Juneja to make the system more efficient. The motivation would be to improve collaboration and to enhance efficiency and user experience during a collaborative session (Rosenberg, [0014, The present technology improves limitations of traditional videoconferencing systems]).

Juneja and Rosenberg do not explicitly teach providing, to the second user, the modified set of search results. However, Pearcy teaches providing, to the second user, the modified set of search results, wherein the modified set of search results are provided to the second user based on adding the second user to the query session, and wherein providing the modified set of search results comprises causing the modified set of search results to be provided on the display of the mobile device of the first user and on a display of the mobile device of the second user ([0019, a particular search or search query, allowing each of the collaborating users to adjust one or more criteria of the search query in order to explore different result sets in parallel before subsequently sharing both their results and their modified search terms with the other collaborating device], [0056, the search result set can be caused 720 to be presented on each of the respective participating computing devices allowing the participants to collaborate further regarding the development of the search results returned in the collaborative search session], fig. 7; examiner's note: the modified search results are presented on each of the users' devices, which include mobile devices; the automated assistant is taught by Juneja in paragraph [0016, automated speech recognition]; therefore, Pearcy in combination with Juneja teaches the limitation; the results are displayed on the mobile device of each of the users after they are added to the query session).

One of ordinary skill in the art would recognize that incorporating the features of Pearcy, i.e., displaying modified search results on the mobile device of the second user, into the invention of Juneja/Rosenberg would allow displaying search results and modified search results on multiple users' mobile devices. Juneja, Rosenberg, and Pearcy are analogous art because all of the art teaches collaborative sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Pearcy into Juneja/Rosenberg to make the system more efficient. The motivation would be to identify mobile devices of users to display search results to both users so that the users can have personalized search results and view the search results in a secure and organized manner (Pearcy, [0030, coordinate which users (i.e., which other computing devices) are invited or otherwise authorized to join and participate in a given collaborative search session], [0051, it can allow users to filter, sort, organize, and analyze the result set in different ways (i.e., according to and consistent with the varying contexts employed]).

Juneja, Rosenberg, and Pearcy do not explicitly teach: subsequent to providing, to the first user, the first set of search results for the first query, receiving the additional input. However, Kanani teaches, subsequent to providing, to the first user, the first set of search results for the first query, receiving, from a second user, additional input to refine the first query (fig. 5, elements 535, 540, 545, 550, 555; [0060, The process 500 may include an operation 535 of presenting, via the user interface, a first visualization of the first query results. The first visualization includes information that may be used for diagnosing the first performance problem]; [0061, The process 500 may include an operation 540 of receiving, via the user interface, a second user input actuating a respective indicator of the one or more first indicators]; examiner's note: the first query results are presented to the user, and the second user then refines the first query; Juneja teaches the automated assistant and that the users are located in the same physical environment in paragraph [0060]).

One of ordinary skill in the art would recognize that incorporating the features of Kanani, i.e., receiving additional input from the second user subsequent to providing the first set of search results to the first user, into Juneja/Pearcy/Rosenberg would allow refining the first query subsequent to providing the search results to the first user. Juneja, Rosenberg, Pearcy, and Kanani are analogous art because all of the art teaches searching and displaying search results. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Kanani into Juneja/Pearcy/Rosenberg to make the system more efficient. The motivation would be to improve collaboration, reduce redundant queries, and enhance efficiency and user experience (Kanani, [0053, to efficiently identify the root cause of performance problems experience by the communication and collaboration platform 110], [0022, The query building user interface may be configured to allow administrators to quickly and efficiently formulate queries on the performance data stored in the performance information datastore 125 and to generate visualizations of the data]).
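Stripped of the citation mapping, the claim 1 flow at issue — voice-identify the first user, serve results, then automatically add a co-located second user whose utterance refines the query, without a separate join request — can be sketched as follows. This is a minimal illustrative sketch with hypothetical names; it is not the applicant's implementation or that of any cited reference:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the claim 1 flow (all names hypothetical):
# identify a speaker, answer the first query, then automatically add a
# second co-located speaker to the session when they refine the query.

@dataclass
class QuerySession:
    query: str
    participants: list = field(default_factory=list)

def identify_user(voice_sample: str, profiles: dict) -> str:
    """Stand-in for voice fingerprinting: map a voice sample to a user ID."""
    return profiles[voice_sample]

def search(query: str) -> list:
    """Stand-in for the search backend."""
    return [f"result for: {query}"]

profiles = {"voice-a": "user-1", "voice-b": "user-2"}

# First user starts the session and receives the first set of results.
first_user = identify_user("voice-a", profiles)
session = QuerySession(query="hotels in Ocean City", participants=[first_user])
results = search(session.query)

# Second user refines the query; per the claim, they are added
# automatically, without a separate request to join the session.
second_user = identify_user("voice-b", profiles)
session.query += " New Jersey"
if second_user not in session.participants:
    session.participants.append(second_user)  # automatic add
modified_results = search(session.query)      # provided to both users
```

The §112 dispute above turns on exactly the `automatic add` step: the specification describes prompting the first user with a selectable option, whereas this sketch (like the claim) adds the second user without any prompt.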
With respect to claim 5, Juneja, Rosenberg, Pearcy, and Kanani in combination teach the method according to claim 1. Juneja teaches an automated assistant ([0109, asking a virtual assistant]) but does not explicitly teach that adding the second user to the query session in response to receiving the additional input comprises adding the mobile device of the second user to the query session; and that providing the modified set of search results to the second user comprises the first automated assistant causing the modified set of search results to be provided on a display of the mobile device of the second user based on adding the mobile device of the second user to the query session.

However, Pearcy teaches adding the second user to the query session in response to receiving the additional input comprises adding the mobile device of the second user to the query session ([0023, Computing devices 105, 110, 115, 120 can include traditional and mobile computing devices, including personal computers, laptop computers, tablet computers, smartphones, personal digital assistants, feature phones, handheld video game consoles, desktop computers], [0030, a device coordinator 260a, 260b can be provided to coordinate which users (i.e., which other computing devices) are invited or otherwise authorized to join and participate in a given collaborative search session]; examiner's note: mobile devices of collaborating users are added to the collaborative sessions); and providing the modified set of search results to the second user comprises the first automated assistant causing the modified set of search results to be provided on a display of the mobile device of the second user based on adding the mobile device of the second user to the query session ([0030, a device coordinator 260a, 260b can be provided to coordinate which users (i.e., which other computing devices) are invited or otherwise authorized to join and participate in a given collaborative search session], [0056, the search result set can be caused 720 to be presented on each of the respective participating computing devices allowing the participants to collaborate further regarding the development of the search results returned in the collaborative search session], fig. 7; examiner's note: the modified search results are presented on each of the users' devices when the other users join the collaborative sessions; the automated assistant is taught by Juneja in paragraph [0016, automated speech recognition]; therefore, Pearcy in combination with Juneja teaches the limitation).

One of ordinary skill in the art would recognize that incorporating the features of Pearcy, i.e., adding the users' mobile devices and displaying modified search results to the first user and the second user, into the invention of Juneja/Rosenberg/Kanani would allow displaying modified search results on multiple users' mobile devices. Juneja, Rosenberg, Pearcy, and Kanani are analogous art because all of the art teaches searching and displaying search results. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Pearcy into Juneja/Rosenberg/Kanani to add the mobile devices of the users into the query sessions and display modified search results to the first and second users. The motivation would be to display search results to both users so that the users can have personalized search results and view the search results in a secure and organized manner (Pearcy, [0030, coordinate which users (i.e., which other computing devices) are invited or otherwise authorized to join and participate in a given collaborative search session], [0051, it can allow users to filter, sort, organize, and analyze the result set in different ways (i.e., according to and consistent with the varying contexts employed]).
With respect to claim 6, Juneja, Rosenberg, Pearcy, and Kanani in combination teach the method according to claim 1. Juneja further teaches wherein: adding the second user to the query session in response to receiving the additional input comprises adding the user account of the second user to the query session ([0009, a profile ID of the person conducting the search is used by the automatic speech recognition module in order to determine which words to pass to the NLP algorithm], [0054, first user 160 may offer confirmation, e.g., by repeating "Van Halen" or saying, "Yes]; examiner's note: the user's voice profile is added to the query session when the first user confirms the profile); and providing the modified set of search results is based on subsequently detecting the second user based on the voice of the second user or based on the face of the second user (fig. 7B; [0113, At step 769, the voice engine provides the new search result(s) based on the first query and the supplement. one or more of the new search results may be provided via an interface for the virtual assistant and/or another connected device], [0114, There are many ways to determine whether to include a supplement from a second voice input]; examiner's note: the new search results (modified search results) are provided to the users (including the second user) based on the detected voice of the second user).

Juneja does not explicitly teach providing the modified set of search results to the second user.
However, Pearcy teaches providing the modified set of search results to the second user ([0019, a particular search or search query, allowing each of the collaborating users to adjust one or more criteria of the search query in order to explore different result sets in parallel before subsequently sharing both their results and their modified search terms with the other collaborating device], [0056, the search result set can be caused 720 to be presented on each of the respective participating computing devices allowing the participants to collaborate further regarding the development of the search results returned in the collaborative search session], fig. 7; examiner's note: the modified search results are presented on each of the users' devices, which include mobile devices; the automated assistant is taught by Juneja in paragraph [0016, automated speech recognition]; therefore, Pearcy in combination with Juneja teaches the limitation; the results are displayed on the mobile device of each of the users after they are added to the query session).

One of ordinary skill in the art would recognize that incorporating the features of Pearcy, i.e., displaying modified search results on the mobile device of the second user, into the invention of Juneja/Rosenberg would allow displaying search results and modified search results on multiple users' mobile devices. Juneja, Rosenberg, and Pearcy are analogous art because all of the art teaches collaborative sessions. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Pearcy into Juneja/Rosenberg to make the system more efficient.
The motivation would be to identify the users’ mobile devices in order to display the search results to both users, so that the users can have personalized search results and can view the search results in a secure and organized manner (Pearcy, [0030, coordinate which users (i.e., which other computing devices) are invited or otherwise authorized to join and participate in a given collaborative search session], [0051, it can allow users to filter, sort, organize, and analyze the result set in different ways (i.e., according to and consistent with the varying contexts employed)]). Claim 7 encompasses the same scope of limitations as claim 1, with the addition of a computer program product comprising one or more non-transitory computer-readable storage media having program instructions collectively stored on the one or more computer-readable storage media ([0068, Memory may be an electronic storage device provided as storage]). Therefore, claim 7 is rejected on the same basis as the rejection of claim 1. Claim 11 is rejected on the same basis as the rejection of claim 5. Claim 12 is rejected on the same basis as the rejection of claim 6. Claim 13 encompasses the same scope of limitations as claim 1, with the addition of a processor, a computer-readable memory, one or more non-transitory computer-readable storage media, and program instructions collectively stored on the one or more computer-readable storage media ([0068, Memory may be an electronic storage device provided as storage]). Therefore, claim 13 is rejected on the same basis as the rejection of claim 1. Claim 17 is rejected on the same basis as the rejection of claim 5. Claim 18 is rejected on the same basis as the rejection of claim 6.
With respect to claim 19, Juneja, Pearcy, and Kanani in combination teach the method according to claim 1. Juneja teaches wherein the first query is a spoken utterance provided by the first user, and wherein the additional input to refine the first query is an additional spoken utterance provided by the second user ([0084, there might be a slight pause between two utterances by a first user that were intended to be one statement or query submitted to a voice assistant], [0093, request 114 may be identified as spoken by first user 110 and, e.g., first user 110 may be assigned as the first profile]; examiner’s note: the queries are uttered by the users). Claim 20 is rejected on the same basis as the rejection of claim 19. Claim 21 is rejected on the same basis as the rejection of claim 19. Claims 2, 8, and 14 are rejected under 35 U.S.C. 103 as being unpatentable over Juneja et al. (US 2023/0186941) in view of Rosenberg et al. (US 2018/0316893), Pearcy (US 2013/0173569), Kanani et al. (US 2022/0197899), and Hertschuh et al. (US 2016/0350825). With respect to claim 2, Juneja, Rosenberg, Pearcy, and Kanani in combination teach the method according to claim 1, but do not in combination explicitly teach wherein: the query session is a shopping session; the first set of search results comprises a first set of products; and the modified set of search results comprises a modified set of products.
However, Hertschuh teaches wherein: the query session is a shopping session ([0030, the assisted shopping application 155 and assisted shopping agent application 121 can allow the assisted shopping]; examiner’s note: a shopping session); the first set of search results comprises a first set of products ([0031, a new search for products in the product catalog]; examiner’s note: the query session includes products); and the modified set of search results comprises a modified set of products ([0031, The CS agent application 154 can also allow the CS agent to refine search terms corresponding to the speech input and generate a new search for products in the product catalog 1]; examiner’s note: the refined search results include modified sets of products, as also shown in fig. 5). One of ordinary skill in the art would recognize that incorporating the features of Hertschuh, i.e., that the query session is a shopping session and the modified results include products, into the invention of Juneja/Pearcy would support multiple types of search terms and query sessions. Juneja, Rosenberg, Pearcy, Kanani, and Hertschuh are analogous art because each teaches searching data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Hertschuh into Juneja/Rosenberg/Pearcy/Kanani to have a system that provides multiple types of search sessions. The motivation would be to support multiple types of query sessions, including shopping sessions whose results comprise products and modified sets of products, so that the users in the session can receive product recommendations, saving the users time ([0024, the assisted shopping application 155 that allow an assisted shopping experience to take place between a user and CS agent]). Claim 8 is rejected on the same basis as the rejection of claim 2. Claim 14 is rejected on the same basis as the rejection of claim 2. Claims 4, 10, and 16 are rejected under 35 U.S.C.
103 as being unpatentable over Juneja et al. (US 2023/0186941) in view of Rosenberg et al. (US 2018/0316893), Pearcy (US 2013/0173569), Kanani et al. (US 2022/0197899), and Goldstein et al. (US 2017/0134335). With respect to claim 4, Juneja, Rosenberg, Pearcy, and Kanani in combination teach the method of claim 1. Juneja further teaches further comprising, by the first automated assistant ([0109, asking a virtual assistant]; examiner’s note: an automated assistant), but the references do not explicitly teach a filter term based on an inferred preference of the second user, wherein generating the modified set of search results is further based on the filter term. However, Goldstein teaches automatically determining a filter term based on an inferred preference of the second user, wherein generating the modified set of search results is further based on the filter term ([0015, inferring a preference based on metadata in the communication. Using the inferred preference, the networked system triggers a search process], [0016, the tailored responses are based on preferences inferred from one or more messages], [0031, infer preferences from message metadata and the conversations]; examiner’s note: the filter term is determined automatically based on an inferred preference of a user, and the response (the modified search results) is generated). One of ordinary skill in the art would recognize that incorporating the features of Goldstein, i.e., filtering on a term based on a user’s inferred preference and generating results based on the inferred term, into the invention of Juneja, Rosenberg, Pearcy, and Kanani would allow filtering based on an inferred preference. Juneja, Rosenberg, Pearcy, Kanani, and Goldstein are analogous art because each teaches searching data.
Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Goldstein into Juneja, Pearcy, Rosenberg, and Kanani to have a system with filter terms based on a user’s inferred preference. The motivation would be to use an inferred-preference filter term to gauge the user’s interest level faster and find the most appropriate query results, and also to generate modified query results based on the inferred preference of the user so as to recommend search results matching that preference (Goldstein, [0031, the result preparation module 214 may recommend hotels close to the arena where the sports game or concert is occurring]). Claim 10 is rejected on the same basis as the rejection of claim 4. Claim 16 is rejected on the same basis as the rejection of claim 4. Claim 22 is rejected under 35 U.S.C. 103 as being unpatentable over Juneja et al. (US 2023/0186941) in view of Rosenberg et al. (US 2018/0316893), Pearcy (US 2013/0173569), Kanani et al. (US 2022/0197899), and Miller et al. (US 2020/0134211). With respect to claim 22, Juneja, Rosenberg, Pearcy, and Kanani in combination teach the method according to claim 1, and Rosenberg further teaches automatically adding the second user to the query session. Juneja, Rosenberg, Pearcy, and Kanani do not explicitly teach determining, by the first automated assistant, an environmental context associated with the first client device. However, Miller teaches determining, by the first automated assistant, an environmental context associated with the first client device, based on the environmental context indicating that the first query is not provided in a private setting ([0083, the personal assistant 315 can evaluate the privacy-level of the environment.
Many different types of signals can be used to determine the privacy-level of the environment including, visual analysis, audio analysis, signal analysis, contextual data, and user inputs. In this case, the personal assistant 315 determines that the second user 325, third user 330, and fourth user 335 are present in the room. In this example, one or more of the other users do not have access to the meeting information and the environmental-privacy is determined to be public]; examiner’s note: determining that the environmental context is not private). One of ordinary skill in the art would recognize that incorporating the features of Miller, i.e., determining an environmental context indicating a public environment, into the invention of Juneja, Rosenberg, Pearcy, and Kanani would allow determining the environmental context. Juneja, Rosenberg, Pearcy, Kanani, and Miller are analogous art because each teaches searching data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Miller into Juneja, Pearcy, Rosenberg, and Kanani to have a system that adds users based on the environmental context, enhancing the search results and making the system more efficient (Miller, [0044, the environmental privacy level may be classified as public when one or more people are present in the environment and are not known to have access to a data store from which information in the communication originated]). Claim 23 is rejected under 35 U.S.C. 103 as being unpatentable over Juneja et al. (US 2023/0186941) in view of Rosenberg et al. (US 2018/0316893), Pearcy (US 2013/0173569), Kanani et al. (US 2022/0197899), and Stekkelpak (US 8,510,285).
With respect to claim 23, Juneja, Rosenberg, Pearcy, and Kanani in combination teach the method according to claim 1, and Rosenberg further teaches automatically adding the second user to the query session. Juneja, Rosenberg, Pearcy, and Kanani do not explicitly teach accessing a query history associated with the second user; and determining a predicted interest level of the second user in the query session based on the query history associated with the second user. However, Stekkelpak teaches accessing a query history associated with the second user ([col. 4, lines 63-67, “Information about the user 101, such as an Internet browsing history and search query history, can be associated with an identifier 125 for the user 101, such as a user account for the user 101 or a cookie stored on the client device 102 of the user 101.”]; examiner’s note: accessing the user’s history); and determining a predicted interest level of the second user in the query session based on the query history associated with the second user, wherein adding the second user is further based on the predicted interest level of the second user in the query session satisfying a threshold ([col. 2, lines 50-57, “The search engine system predicts that the topic is likely of interest to the user when the confidence score satisfies a threshold. The search engine system may identify multiple topics, and can simultaneously provide content for multiple topics satisfying the threshold or provide content for a topic assigned the highest confidence score”]; examiner’s note: determining that the predicted user interest satisfies a threshold; Rosenberg teaches adding the second user; therefore, Rosenberg and Stekkelpak together teach the limitation). One of ordinary skill in the art would recognize that incorporating the features of Stekkelpak, i.e., determining that a user’s predicted interest satisfies a threshold by accessing the user’s query history, into the invention of Juneja, Rosenberg, Pearcy, and Kanani would allow determining the predicted interest level.
Juneja, Rosenberg, Pearcy, Kanani, and Stekkelpak are analogous art because each teaches searching data. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date to incorporate the features of Stekkelpak into Juneja, Pearcy, Rosenberg, and Kanani to have a system that adds only users whose predicted interest level satisfies a threshold, so that only interested users join the session, keeping the session productive and saving the users time (Stekkelpak, [0044, the environmental privacy level may be classified as public when one or more people are present in the environment and are not known to have access to a data store from which information in the communication originated]). Conclusion Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action. Any inquiry concerning this communication or earlier communications from the examiner should be directed to FATIMA P MINA, whose telephone number is (571) 270-3556. The examiner can normally be reached Monday - Friday, 9:00 am - 5:00 pm.
Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Ann Lo, can be reached at 571-271-9767. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /FATIMA P MINA/ Examiner, Art Unit 2159 /ANN J LO/ Supervisory Patent Examiner, Art Unit 2159

Prosecution Timeline

Feb 26, 2024: Application Filed
Oct 31, 2024: Non-Final Rejection (§103, §112)
Feb 05, 2025: Response Filed
Feb 05, 2025: Examiner Interview Summary
Feb 05, 2025: Applicant Interview (Telephonic)
Mar 05, 2025: Final Rejection (§103, §112)
Jun 12, 2025: Examiner Interview Summary
Jun 12, 2025: Applicant Interview (Telephonic)
Jun 16, 2025: Response after Non-Final Action
Jun 26, 2025: Request for Continued Examination
Jul 02, 2025: Response after Non-Final Action
Aug 23, 2025: Non-Final Rejection (§103, §112)
Dec 04, 2025: Applicant Interview (Telephonic)
Dec 04, 2025: Examiner Interview Summary
Dec 05, 2025: Response Filed
Mar 20, 2026: Final Rejection (§103, §112) (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12475179
SYSTEM AND METHOD FOR USER CONTENT PERSONALIZATION
Granted Nov 18, 2025 (2y 5m to grant)
Patent 12468671
HEALTH-BASED MANAGEMENT OF A NETWORK
Granted Nov 11, 2025 (2y 5m to grant)
Patent 12380151
SEMANTIC TRANSLATION OF DATA SETS
Granted Aug 05, 2025 (2y 5m to grant)
Patent 12373400
DYNAMIC METHODS FOR IMPROVING QUERY PERFORMANCE FOR A SECURE STORAGE SYSTEM
Granted Jul 29, 2025 (2y 5m to grant)
Patent 12367251
BROWSER BASED ROBOTIC PROCESS AUTOMATION
Granted Jul 22, 2025 (2y 5m to grant)
Based on this examiner's 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 5-6
Grant Probability: 64%
With Interview: 90% (+25.6%)
Median Time to Grant: 4y 2m
PTA Risk: High
Based on 402 resolved cases by this examiner. Grant probability derived from career allow rate.
