Prosecution Insights
Last updated: April 19, 2026
Application No. 17/913,021

INFORMATION PROCESSING APPARATUS, INTERACTIVE ROBOT, AND CONTROL METHOD FOR PROVIDING ASSISTANCE TO A CONVERSATION

Final Rejection: §103, §112
Filed: Sep 20, 2022
Examiner: SUBRAMANI, NANDINI
Art Unit: 2656
Tech Center: 2600 (Communications)
Assignee: Sony Group Corporation
OA Round: 4 (Final)
Grant Probability: 63% (Moderate)
Expected OA Rounds: 5-6
Expected Time to Grant: 3y 2m
Grant Probability with Interview: 99%

Examiner Intelligence

Career Allow Rate: 63% (55 granted / 87 resolved; +1.2% vs TC average)
Interview Lift: +49.4% for resolved cases with an interview
Typical Timeline: 3y 2m average prosecution; 24 applications currently pending
Career History: 111 total applications across all art units
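The headline allow rate above is simple arithmetic over the examiner's resolved cases. A minimal sketch, illustrative only, using the counts reported in this section:

```python
# Illustrative check of the examiner stats reported above.
granted = 55    # applications this examiner has allowed
resolved = 87   # total resolved applications (allowed + abandoned)

allow_rate = granted / resolved
print(f"Career allow rate: {allow_rate:.0%}")  # -> Career allow rate: 63%
```

55/87 rounds to the 63% shown in the dashboard; the pending count (24) is excluded because only resolved cases enter the denominator.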

Statute-Specific Performance

§101: 15.6% (-24.4% vs TC avg)
§103: 60.4% (+20.4% vs TC avg)
§102: 10.0% (-30.0% vs TC avg)
§112: 11.6% (-28.4% vs TC avg)
Deltas are relative to the Tech Center average estimate • Based on career data from 87 resolved cases
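The per-statute deltas can be cross-checked against the baseline they were measured from: subtracting each delta from its rate recovers the implied Tech Center average. A quick sketch, illustrative only, using the figures in the table above:

```python
# Recover the implied Tech Center average from each statute's rate and its
# reported delta (TC average = rate - delta); figures from the table above.
stats = {"101": (15.6, -24.4), "103": (60.4, +20.4),
         "102": (10.0, -30.0), "112": (11.6, -28.4)}
for statute, (rate, delta) in stats.items():
    print(f"§{statute}: implied TC average = {rate - delta:.1f}%")  # each prints 40.0%
```

Every row implies the same 40.0% baseline, which suggests the report uses a single TC-wide average estimate rather than per-statute baselines.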

Office Action

§103, §112
DETAILED ACTION

Introduction

Applicant's submission filed on 11/27/2025 has been entered. Claims 1-2, 4-11, and 13-16 are pending in the application and have been examined.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first-inventor-to-file provisions of the AIA.

Response to Amendment

The response filed on 11/27/2025 has been entered and considered in this Office Action. Claims 1-2, 4-11, and 13-16 have been examined. Claim 3 has been cancelled. Applicant's amendment to claim 4 overcomes the claim objections set forth in the Office Action mailed 09/02/2025. Applicant's amendments to claims 14 and 16 overcome the rejection under 35 U.S.C. 112(b), second paragraph, previously set forth in the Office Action mailed 09/02/2025.

Response to Arguments

Applicant's arguments filed 11/27/2025 have been fully considered as follows. Applicant's arguments with respect to claim 1 (also representative of claim 13) state that "Vuskovic describes that the automated assistant identifies shared interests of the participants that are already in the group chat based on profiles of the participants. However, Vuskovic does not teach or suggest that the automated assistant matches the users based on profile information to initialize a conversation. In fact, Vuskovic is directed towards providing new content/topic to an already existing chat/message group. However, Vuskovic nowhere teaches or suggests matching of users to initialize a new group chat." The Examiner respectfully disagrees: Vuskovic teaches identifying, based on individual participant profiles, the shared interests of participants and selecting new content to be shared to continue the conversation (Vuskovic, [0072], [0075]).
The broadest reasonable interpretation of "match, based on the received profile information, the two users to initialize a conversation" includes matching the shared interests of the two users (only two users are recited in the claim, and no further matching as described in the Remarks is recited) to suggest common topics for conversation when the conversation becomes inactive, as indicated in Vuskovic [0075]. Vuskovic therefore teaches matching, based on the received profile information, the two users to initialize a conversation, and the rejection of claims 1 and 14 under 35 U.S.C. 103 is sustained and further updated accordingly. It is further noted that the prior art of Swanson, US Patent 10,360,615, teaches matching profiles and suggesting topics for initiating conversations based on the shared profile interest of "architecture" (see Swanson, col. 6, lines 28-37); hence Swanson also teaches matching, based on the received profile information, the two users to initialize a conversation.

To the extent the remaining dependent claims rejected under 35 U.S.C. 103 are traversed for reasons similar to those argued for independent claims 1, 13, 14, and 16, the Examiner respectfully directs Applicant to the corresponding responses provided above for those claims. For at least the same reasons, the Examiner respectfully disagrees; Applicant's arguments have been fully considered but are not persuasive.

Claim Rejections - 35 USC § 112

The following is a quotation of 35 U.S.C.
112(b):

(b) CONCLUSION.—The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the inventor or a joint inventor regards as the invention.

The following is a quotation of 35 U.S.C. 112 (pre-AIA), second paragraph:

The specification shall conclude with one or more claims particularly pointing out and distinctly claiming the subject matter which the applicant regards as his invention.

Claims 14 and 16, and claim 15 which depends therefrom, are rejected under 35 U.S.C. 112(b) or 35 U.S.C. 112 (pre-AIA), second paragraph, as being indefinite for failing to particularly point out and distinctly claim the subject matter which the inventor or a joint inventor (or, for applications subject to pre-AIA 35 U.S.C. 112, the applicant) regards as the invention. Claims 14 and 16 recite the limitation "detect, after the beverage is served, an utterance of the first user in a conversation remotely with a second user, wherein a second interactive robot serves the beverage to the second user" in lines 4-6, followed by "matches, based on the received profile information, the first user and the second user to initialize the conversation" in lines 12-13. This renders the claims indefinite: the conversation has already been detected between the first user and the second user for further processing, yet the claims subsequently recite matching based on the received profile information to initialize the very conversation that was already detected.

Claim Rejections - 35 USC § 103

The text of those sections of Title 35, U.S. Code not included in this action can be found in a prior Office action.

Claims 1-2, 6-7 and 13 are rejected under 35 U.S.C. 103 as being unpatentable over Vuskovic et al., US PgPub 2019/0007228, in view of Petrick et al., "Planning for social interaction in a robot bartender domain," Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 23,
2013, further in view of McColl, D. et al., "Meal-time with a socially assistive robot and older adults at a long-term care facility," Journal of Human-Robot Interaction 2.1 (2013): 152-171.

Regarding claim 1, Vuskovic teaches an information processing apparatus comprising: receive profile information associated with each user of two users (see Vuskovic, [0072], which discusses obtaining the participant profiles); match, based on the received profile information, the two users to initialize a conversation (see Vuskovic, [0072], [0075], which discuss identifying the shared interests and selecting new content to be shared between the participants; this is also presented audibly on a standalone speaker as discussed in Vuskovic [0066]; to initialize a conversation); at least one processor configured to analyze a plurality of utterances of two users having a conversation through a network, wherein the plurality of utterances is detected by respective interactive robots associated with the two users (see Vuskovic, [0062], Fig. 4: participant 401 interacts with client device 406 (interactive robot/bots) and other users via their respective devices); control transmission of the voice data to the respective interactive robots over the network (see Vuskovic, [0026], which describes the voice interactions via the cloud/internet); and control the respective interactive robots to output a conversation assisting voice, wherein the output of the conversation assisting voice is based on the voice data (see Vuskovic, [0066]: the automated assistant processes the conversation and proactively provides the following new content: "<movie> is playing at <cinemas> at 7:30 that night. You could also grab some Laotian food at <Laotian restaurant> beforehand. I see a table for two is available at 6. Shall I make a reservation?"). However, Vuskovic fails to teach that each of the respective interactive robots includes a housing and a server mechanism to serve a beverage.
However, Petrick teaches each of the respective interactive robots including a housing and a server mechanism to serve a beverage (see Petrick, pg. 390, Overview of the Robot Bartender: "The target application for this work is a bartending scenario, using the robot platform shown in Figure 1. The robot hardware itself consists of two 6-degrees-of-freedom industrial manipulator arms with grippers, mounted to resemble human arms"); the conversation assisting voice includes a system utterance that provides assistance to the conversation (see Petrick, pg. 391, Figure 2, discussing the bartender scenario, and pg. 394, Robot actions, discussing Planning Interactions for Social Behavior; this is interpreted as conversation assistance after alcoholic beverages are served). Vuskovic teaches conversation assistance based on the conversation between two users but does not teach a voice output after alcoholic beverages are served; Petrick teaches outputting an assisting voice when alcoholic beverages are served. Using the known technique taught by Petrick to provide the voice assistance (see Petrick, sect. Overview of the Robot Bartender) in Vuskovic to provide a conversation assisting voice, with the resulting improvement in conversation assistance, would have been obvious to one of ordinary skill in the art. However, Vuskovic in view of Petrick fails to teach: generate voice data based on a length of time that exceeds a threshold period of time, wherein the length of time corresponds to a time period for which the conversation is halted; determine, based on sensor data, a drinking progress of the beverage of each of the two users; wherein the output of the conversation assisting voice is based on the voice data and the drinking progress of the beverage of each of the two users.
However, McColl teaches generating voice data based on a length of time for which the conversation is halted that exceeds a determined period of time, wherein the length of time corresponds to a time period for which the conversation is halted (see McColl, Fig. 6 FSA: the robot focuses a person's attention on a particular dish or the beverage on the meal tray; the order in which a user is prompted to eat or drink is defined a priori based on a meal plan provided by the caregiver; the robot utilizes a finite-state acceptor (FSA) to determine which behaviors to implement, consisting of a set of robot behavior states and triggering events, where the latter are provided by the meal-time monitoring system and are used to determine the appropriate robot behavior to display (Figure 6); see also pg. 157: the FSA can be adapted to utilize various sensory inputs to trigger behaviors based on consumption levels and/or time periods (the determined period of time for which the conversation is halted) for the beverage, depending on the meal plan implemented, including prompting behaviors to drink); determine, based on sensor data, a drinking progress of the beverage of each of the two users (see McColl, pg. 156, Fig. 3, Meal Tray Sensory System: the load cell sensors determine the change in weight of the beverage in the cup and the time with no change in weight; multiple trays will monitor multiple users); control the respective interactive robots to output a conversation assisting voice, wherein the output of the conversation assisting voice is based on the voice data and the drinking progress of the beverage of each of the two users, and the conversation assisting voice includes a system utterance that provides assistance to the conversation (see McColl, pg. 158: based on the FSA, the prompting behavior is used to motivate the users to complete the given meal task based on Encourage and Orient; pg. 159, Table, per the FSA; and pg. 157, which can be modified to assist in the conversation/drinks).
Vuskovic, Petrick, and McColl are considered analogous to the claimed invention because they relate to methods for conversation interactions with humans. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Vuskovic in view of Petrick on the method for processing a multiple-person conversation with the human-like socially assistive robot teachings of McColl, to improve independent eating habits of elderly individuals and enhance their meal-time experience during the important self-maintenance activity of eating and drinking (see McColl, pg. 153).

Regarding claim 2, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1. Petrick further teaches that the at least one processor is further configured to control the output of the conversation assisting voice based on a status of the conversation, and that the conversation is conducted after the beverage is served to each of the two users by the respective interactive robots (see Petrick, pg. 391, Figure 2, discussing the bartender scenario, and pg. 394, Robot actions, discussing Planning Interactions for Social Behavior; this is interpreted as conversation assistance after alcoholic beverages are served). The same motivation to combine as for claim 1 applies here.

Regarding claim 6, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1.
Vuskovic further teaches wherein the at least one processor is further configured to control the output of the conversation assisting voice, and the conversation assisting voice indicates contents of information highlighted on news sites on the network (see Vuskovic, [0053]: once one or more shared interests and/or conversation topics are identified, new developments related to those shared interests/topics (e.g., news stories) may be proactively provided to one or more client devices operated by the participants using other interfaces and/or output devices).

Regarding claim 7, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1. Vuskovic further teaches, in a case where a word associated with a Web service used by one of the two users is contained in the utterances of the two users, the at least one processor is further configured to determine a word in the plurality of utterances of each of the two users, wherein the word is associated with a Web service (see Vuskovic, [0048], which discusses a group chat analysis service 138 that works with user-controlled resources 128 to determine the word associated with a Web service (per Specification [0149-0151])); and control the output of the conversation assisting voice based on a usage situation of the Web service (see Vuskovic, [0052]: once the automated assistant detects the mention of football, the automated agent presents information on the schedule, a ticket purchase link, and also links for the game to be recorded (Web service usage)).

Regarding claim 13, which is directed to a method claim corresponding to the information processing apparatus claim presented in claim 1, it is rejected on the same grounds stated above regarding claim 1.

Claims 4-5, 8 and 9 are rejected under 35 U.S.C. 103 as being unpatentable over Vuskovic et al., US PgPub 2019/0007228, in view of McColl, D. et al.,
"Meal-time with a socially assistive robot and older adults at a long-term care facility," Journal of Human-Robot Interaction 2.1 (2013): 152-171, further in view of Petrick et al., "Planning for social interaction in a robot bartender domain," Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 23, 2013, further in view of Wang.

Regarding claim 4, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1. Vuskovic teaches that the two users include a first user and a second user (see Vuskovic, [0062], Fig. 4: participant 401 interacts with client device 406 (interactive robot/bots) and other users (a first user and a second user) via their respective devices). However, Vuskovic in view of Petrick further in view of McColl fails to teach: based on a plurality of periods of time of the plurality of utterances, a first utterance in a first period of time, and the second user to generate a second utterance in a second period of time, wherein the plurality of utterances is different from each of the first utterance and the second utterance, the plurality of periods of time is different from each of the first period of time and the second period of time, and the first period of time is less than the second period of time. However, Wang teaches, based on a plurality of periods of time of the plurality of utterances, a first utterance in a first period of time (see Wang, Fig. 8, previous speech; Wang [0177]), and the second user to generate a second utterance in a second period of time (see Wang, Fig. 8, current speech), wherein the plurality of utterances is different from each of the first utterance and the second utterance, and the plurality of periods of time is different from each of the first period of time and the second period of time (see Wang, Fig. 8, analysis and threshold adjusting, determining intervention; Wang, Fig.
7), and the first period of time is less than the second period of time (see Wang, [0022], [0024], [0073], which discuss a conversation exchange frequency threshold to determine the intervention as shown in Wang, Fig. 5: a parameter such as a pause time and exchange frequency of the conversation may be detected during the user conversation to determine the timing for a machine to intervene in the user conversation; in addition, the machine may actively wake up to participate in the user conversation based on the result of the timing determination, and may provide the users with corresponding feedback content to smooth the user conversation while satisfying the demand of the users and achieving more natural human-computer interaction; Wang, Fig. 7 determines intervention timing and intervenes in the user conversation (per Specification [0084])). Vuskovic, Petrick, McColl, and Wang are considered analogous to the claimed invention because they relate to methods for conversation assistance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Vuskovic, Petrick, and McColl on the method for processing a multiple-person conversation with the conversation-intervention teachings of Wang, to improve the quality of a voice assistant device in group conversations (see Wang, Fig. 5).

Regarding claim 5, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1. However, Vuskovic in view of Petrick further in view of McColl fails to teach: detect a silence of the two users for the threshold period of time; and control, based on the detected silence, the output of the conversation assisting voice which prompts each of the two users to generate the plurality of utterances.
However, Wang further teaches wherein the at least one processor is further configured to detect a silence of the two users for the threshold period of time (see Wang, [0086], which determines the conversation pause (silence) time threshold); and control, based on the detected silence, the output of the conversation assisting voice which prompts each of the two users to generate the plurality of utterances (see Wang, [0086]: intervene when the conversation pause time is greater than a conversation pause time threshold; Wang, Fig. 5: the intelligent system actively joins the conversation). The same motivation as for claim 4 applies here.

Regarding claim 8, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1. However, Vuskovic in view of Petrick further in view of McColl fails to teach: analyze emotions of each of the two users based on the plurality of utterances between the two users; and control the output of the conversation assisting voice based on a result of the analysis of the analyzed emotions of each of the two users. However, Wang further teaches wherein the at least one processor is further configured to analyze emotions of each of the two users based on the plurality of utterances between the two users (see Wang, [0073], which discusses a multi-user conversation being analyzed based on user emotions); and control the output of the conversation assisting voice based on a result of the analysis (see Wang, [0089], [0103], [0104]: the user emotion parameter is included in the conversation parameter to determine the intervention conversation; Wang, Fig. 5: feedback content and intervention timing for the output of the intelligent system). The same motivation as for claim 4 applies here.

Regarding claim 9, Vuskovic in view of Petrick further in view of McColl further in view of Wang teaches the information processing apparatus according to claim 8.
Wang further teaches wherein the at least one processor is further configured to control the output of the conversation assisting voice indicating preferences of the one of the two users having a negative emotion, the preferences being identified (see Wang, [0093-0095], [0179], [0180], [0184]: recognition of user emotion and altering the conversation pause time threshold for intervention; Wang, [0130], [0176]: different feedback content based on user preferences). The same motivation as for claim 8 applies here.

Claims 10-11 are rejected under 35 U.S.C. 103 as being unpatentable over Vuskovic et al., US PgPub 2019/0007228, in view of McColl, D. et al., "Meal-time with a socially assistive robot and older adults at a long-term care facility," Journal of Human-Robot Interaction 2.1 (2013): 152-171, further in view of Petrick et al., "Planning for social interaction in a robot bartender domain," Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 23, 2013, further in view of Koukoumidis et al., US PgPub 2018/0181854.

Regarding claim 10, Vuskovic in view of Petrick further in view of McColl teaches the information processing apparatus according to claim 1. Vuskovic teaches control of a device installed together with each of the respective interactive robots in respective spaces where the two users are present (see Vuskovic, Fig. 4, [0066], which discusses the output interactions of various client devices as shown in Vuskovic, Fig. 1). However, Vuskovic in view of Petrick further in view of McColl fails to teach: analyze emotions of the two users based on the plurality of utterances of the two users.
However, Koukoumidis further teaches analyzing emotions of the two users based on the plurality of utterances of the two users (see Koukoumidis, [0058], which discusses recognition of the emotions of various participants); and controlling a device installed with each of the respective interactive robots in respective spaces where the two users are present (see Koukoumidis, [0058]: based on the emotional state, the appropriate response is delivered to the respective devices (spaces)). Vuskovic, Petrick, McColl, and Koukoumidis are considered analogous to the claimed invention because they relate to methods for conversation assistance. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Vuskovic, Petrick, and McColl on the method for processing a multiple-person conversation with the teachings of Koukoumidis on device voice responses based on the emotional state of the users, to improve the quality of team-member interactions (see Koukoumidis, [0001]).

Regarding claim 11, Vuskovic in view of Petrick further in view of McColl further in view of Koukoumidis teaches the information processing apparatus according to claim 10. McColl further teaches transmitting one of a first control command that controls the device to the respective interactive robots to control the device through the respective interactive robots, or a second control command that transmits the first control command to respective mobile terminals to control the device (see McColl, pg. 158-159, which discusses the robot prompting the appropriate behavior response based on the state of the user; McColl, Fig. 7, Fig. 8).

Claims 14-16 are rejected under 35 U.S.C. 103 as being unpatentable over Wolff et al., US PgPub 2020/0349933, in view of McColl, D. et al., "Meal-time with a socially assistive robot and older adults at a long-term care facility,"
Journal of Human-Robot Interaction 2.1 (2013): 152-171, further in view of Petrick et al., "Planning for social interaction in a robot bartender domain," Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 23, 2013, further in view of Vuskovic et al., US PgPub 2019/0007228.

Regarding claim 14, Wolff teaches at least one processor configured to: detect, after the beverage is served, an utterance of the first user having a conversation remotely with a second user (see Wolff, [0036]: a video processor 330 is in communication with the camera 310 and the dialog manager 305; the video processor 330 receives the video signal and applies visual speech activity detection to the video signal to generate a visual speech activity signal; the dialog manager 305 receives the visual speech activity signal (interpreted as after the alcoholic beverage is served)); control transmission of voice data of the detected utterance of the first user to an information processing apparatus, wherein the information processing apparatus analyzes the utterance of the first user and an utterance of the second user (see Wolff, [0036]: the dialog manager 305 receives the visual speech activity signal and can use it for dialog detection, in addition to using the speech activity signal 326 (the dialog manager is interpreted as the information processing apparatus that analyzes the conversation)); control the conversation between the first user and the matched second user (see Wolff, [0037-0038]: detects the pause prediction based on the dialog and the type of interruptions, and generates a trigger prompt (controlling the conversation), as shown in Wolff, Fig.
3); and control output of a conversation assisting voice based on the voice data of the system utterance that is transmitted from the information processing apparatus, wherein the conversation assisting voice includes the system utterance that provides assistance to the conversation (see Wolff, [0034]: the dialog manager considers the ongoing conversation and accordingly provides the speech prompt per Wolff [0035]; interpreted as providing assistance to the conversation). However, Wolff fails to teach: a housing that includes an opening configured to serve a beverage to a first user; wherein a second interactive robot serves the beverage to the second user. However, Petrick teaches a housing that includes an opening configured to serve a beverage to a first user (see Petrick, pg. 390, Overview of the Robot Bartender, which describes the robot bartender serving the beverage: "The target application for this work is a bartending scenario, using the robot platform shown in Figure 1. The robot hardware itself consists of two 6-degrees-of-freedom industrial manipulator arms with grippers, mounted to resemble human arms"); wherein a second interactive robot serves the beverage to the second user (see Petrick, Fig. 2, which discusses the robot serving a second user; interpreted as a second robot). Wolff teaches conversation assistance based on the conversation between two users but does not teach a voice output after alcoholic beverages are served; Petrick teaches outputting an assisting voice when alcoholic beverages are served. Using the known technique taught by Petrick to provide the voice assistance (see Petrick, sect. Overview of the Robot Bartender) in Wolff to provide a conversation assisting voice, with the resulting improvement in conversation assistance, would have been obvious to one of ordinary skill in the art.
However, Wolff in view of Petrick fails to teach: generates voice data of a system utterance based on a length of time that exceeds a threshold period of time, wherein the length of time corresponds to a time period for which the conversation is halted; determines, based on sensor data, a drinking progress of the beverage of each of the first user and the second user; and control output of a conversation assisting voice based on the drinking progress of the beverage. However, McColl teaches generating voice data of a system utterance based on a length of time that exceeds a threshold period of time, wherein the length of time corresponds to a time period for which the conversation is halted (see McColl, Fig. 6 FSA: the robot focuses a person's attention on a particular dish or the beverage on the meal tray; the order in which a user is prompted to eat or drink is defined a priori based on a meal plan provided by the caregiver; the robot utilizes a finite-state acceptor (FSA) to determine which behaviors to implement, consisting of a set of robot behavior states and triggering events, where the latter are provided by the meal-time monitoring system and are used to determine the appropriate robot behavior to display (Figure 6); see also pg. 157: the FSA can be adapted to utilize various sensory inputs to trigger behaviors based on consumption levels and/or time periods (the determined period of time for which the conversation is halted) for the beverage, depending on the meal plan implemented, including prompting behaviors to drink) and determining, based on sensor data, a drinking progress of the beverage of each of the first user and the second user (see McColl, pg.
156, Fig. 3, Meal Tray Sensory System: the load cell sensors determine the change in weight of the beverage in the cup and the time with no change in weight; multiple trays and robots will monitor multiple users); control output of a conversation assisting voice based on the drinking progress of the beverage and the voice data of the system utterance that is transmitted from the information processing apparatus, wherein the conversation assisting voice includes the system utterance that provides assistance to the conversation (see McColl, pg. 158: based on the FSA, the prompting behavior is used to motivate the users to complete the given meal task based on Encourage and Orient; pg. 159, Table, per the FSA; and pg. 157, which can be modified to assist in the conversation/drinks). Wolff, Petrick, and McColl are considered analogous to the claimed invention because they relate to methods for conversation interactions with humans. Therefore, it would have been obvious to one of ordinary skill in the art before the effective filing date of the claimed invention to have modified the teachings of Wolff in view of Petrick on the method for processing a multiple-person conversation with the human-like socially assistive robot teachings of McColl, to improve independent eating habits of elderly individuals and enhance their meal-time experience during the important self-maintenance activity of eating and drinking (see McColl, pg. 153). However, Wolff in view of Petrick further in view of McColl fails to teach: receives profile information associated with each of the first user and the second user; matches, based on the received profile information, the first user and the second user to initialize the conversation.
However, Vuskovic teaches receives profile information associated with each of the first user and the second user (see Vuskovic, [0072], which discusses obtaining the participant profiles), and matches, based on the received profile information, the first user and the second user to initialize the conversation (see Vuskovic, [0072, 0075], which discuss identifying the shared interests and selecting new content to be shared between the participants; this is also presented audibly on a standalone speaker, as discussed in Vuskovic [0066]; this reads on initializing a conversation, as the claim has already detected the conversation).

Wolff, Petrick and McColl teach methods for conversation interactions with humans; however, they do not teach providing topics for conversation. Vuskovic teaches using profiles of participants to determine shared interests. Using the known technique of prompting with shared topics, as taught by Vuskovic, to provide the conversation assisting voice in the combination of Wolff, Petrick and McColl, thereby providing improved conversation assistance, would have been obvious to one of ordinary skill in the art.

Regarding claim 15, Wolff in view of Petrick, further in view of McColl, further in view of Vuskovic teach the interactive robot of claim 14. McColl further teaches detect a remaining amount of the beverage of each of the first user and the second user (see McColl, pg. 156, Fig. 3, Meal Tray Sensory System: the load cell sensors determine the change in weight of the beverage in the cup and the time for which there is no change in weight; with multiple trays, the robots will monitor multiple users); and control transmission of information indicative of the detected remaining amount of the beverage to the information processing apparatus (see McColl, pg. 156: in general, the meal tray sensing platform is used to monitor the following meal-time activities: 2) the cup has been lifted up for drinking (decrease in cup weight to zero), and 3) the beverage in the cup has been consumed (small decrease in cup weight); and McColl, pg. 158, Figs. 7-8: the robot's prompting behaviors to drink the beverage, where each set of prompting behaviors is used to 1) lift a beverage and 2) drink a beverage).

Regarding claim 16, claim 16 is directed to a method claim corresponding to the interactive robot claim presented in claim 14 and is rejected under the same grounds stated above regarding claim 14.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Djugash, US Patent 9,355,368, teaches methods for actively and automatically providing personal assistance, using a robotic device/platform, based on detected data regarding a user and the user's environment (see Djugash, Abstract). Kawamura et al., US Patent 11,093,995, teaches monitoring of customer consumption activity and management based on the monitoring (see Kawamura, Fig. 7).

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to NANDINI SUBRAMANI, whose telephone number is (571) 272-3916. The examiner can normally be reached Monday through Friday, 12:00 pm to 5:00 pm EST.

Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Bhavesh M Mehta, can be reached at (571) 272-7453. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/NANDINI SUBRAMANI/
Examiner, Art Unit 2656

/BHAVESH M MEHTA/
Supervisory Patent Examiner, Art Unit 2656

Prosecution Timeline

Sep 20, 2022
Application Filed
Aug 05, 2024
Non-Final Rejection — §103, §112
Nov 14, 2024
Response Filed
Feb 12, 2025
Final Rejection — §103, §112
Apr 18, 2025
Response after Non-Final Action
May 19, 2025
Request for Continued Examination
May 20, 2025
Response after Non-Final Action
Aug 28, 2025
Non-Final Rejection — §103, §112
Nov 27, 2025
Response Filed
Jan 14, 2026
Final Rejection — §103, §112
Jan 14, 2026
Examiner Interview (Telephonic)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12562177
CONFERENCE ROOM SYSTEM AND AUDIO PROCESSING METHOD
2y 5m to grant Granted Feb 24, 2026
Patent 12561629
IDENTIFYING REGULATORY DATA CORRESPONDING TO EXECUTABLE RULES
2y 5m to grant Granted Feb 24, 2026
Patent 12505302
SYSTEMS AND METHODS RELATING TO MINING TOPICS IN CONVERSATIONS
2y 5m to grant Granted Dec 23, 2025
Patent 12468884
Machine Learning-Based Argument Mining and Classification
2y 5m to grant Granted Nov 11, 2025
Patent 12450434
NEURAL NETWORK BASED DETERMINATION OF EVIDENCE RELEVANT FOR ANSWERING NATURAL LANGUAGE QUESTIONS
2y 5m to grant Granted Oct 21, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

5-6
Expected OA Rounds
63%
Grant Probability
99%
With Interview (+49.4%)
3y 2m
Median Time to Grant
High
PTA Risk
Based on 87 resolved cases by this examiner. Grant probability derived from career allow rate.
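The headline allow rate in this panel follows directly from the stated record (55 granted of 87 resolved). A minimal sketch of that arithmetic, assuming the card rounds to the nearest whole percent (the separate interview-lift adjustment is not reproduced here):

```python
# Reproduce the career allow rate shown above from the stated record:
# 55 allowances out of 87 resolved cases. Rounding convention assumed.
granted, resolved = 55, 87
allow_rate = granted / resolved

print(f"{allow_rate:.1%}")      # 63.2%
print(round(100 * allow_rate))  # 63, as shown on the card
```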
