Prosecution Insights
Last updated: April 19, 2026
Application No. 18/788,736

CONVERSATION-BASED ASSISTANCE USING A DISPLAY FREE BODY WEARABLE COMPUTING DEVICE

Status: Non-Final OA (§103)
Filed: Jul 30, 2024
Examiner: ABEBE, DANIEL DEMELASH
Art Unit: 2657
Tech Center: 2600 (Communications)
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)

Grant Probability: 89% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 2y 7m
Grant Probability With Interview: 97%

Examiner Intelligence

Career Allow Rate: 89% (above average; 907 granted / 1014 resolved; +27.4% vs TC avg)
Interview Lift: +7.3% (moderate), measured across resolved cases with interview
Avg Prosecution: 2y 7m (typical timeline)
Total Applications: 1037 across all art units; 23 currently pending
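As a quick sanity check, the headline figures in this panel can be reproduced from the raw counts. The with-interview rate used below is an assumed value (the page only reports the rounded 97% and the +7.3% lift), so this is an illustrative sketch, not the tool's actual computation:

```python
# Reproduce the examiner's headline statistics from the raw counts shown above.
granted, resolved = 907, 1014

career_allow_rate = granted / resolved            # ~0.894, reported as 89%
print(f"Career allow rate: {career_allow_rate:.1%}")

# The exact with-interview allow rate is not broken out on this page;
# 96.7% is an ASSUMED value consistent with the reported 97% and +7.3% lift.
with_interview_rate = 0.967
interview_lift = with_interview_rate - career_allow_rate
print(f"Interview lift: {interview_lift:+.1%}")   # roughly +7.3 percentage points
```

The lift is a simple difference in percentage points between the with-interview allow rate and the overall career allow rate.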

Statute-Specific Performance

§101: 11.3% (-28.7% vs TC avg)
§103: 29.9% (-10.1% vs TC avg)
§102: 28.2% (-11.8% vs TC avg)
§112: 8.6% (-31.4% vs TC avg)
Tech Center averages are estimates • Based on career data from 1014 resolved cases

Office Action

§103
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Examiner's Note

Examiner has cited particular columns and line numbers or figures in the references as applied to the claims below for the convenience of the applicant. Although the specified citations are representative of the teachings in the art and are applied to the specific limitations within the individual claim, other passages and figures may apply as well. The applicant is respectfully requested, in preparing the responses, to fully consider the references in their entirety as potentially teaching all or part of the claimed invention, as well as the context of the passage as taught by the prior art or disclosed by the examiner.

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3, 10-15, and 17-19 are rejected under 35 U.S.C. 103 as being unpatentable over Shreeshreemal (Shree) et al. (US 2021/0056968) in view of Newell et al. (US 2021/0256046).
As to claim 1, Shree teaches a method for obtaining information using a display free body wearable computing device (Fig. 1, 102), the method comprising: identifying that a user of the display free body wearable computing device is having a conversation (1202); based on the identifying: obtaining, using at least one sensor (206, 210, 212) of the display free body wearable computing device, a transcription (420) of at least a portion of the conversation; obtaining, based on the transcription, a semantic/context analysis package (1204) for the conversation (Pars. 95, 161, 169-170); making, based at least on the semantic/context analysis package (428), a determination regarding whether it is desirable for the display free body wearable computing device to intervene (1208; Fig. 11, 1108) in the conversation; in a first instance of the determination where it is desirable: generating, based at least on the semantic analysis package, supplementary information; prompting the user to attempt to intervene in the conversation using the supplementary information (Figs. 23, 24; Pars. 87, 135, 253); and in a first instance of the prompting where the user agrees to allow the attempt to intervene: attempting to provide (1212) the supplementary information (Pars. 208, 218, 222-224; Figs. 12-24).

It is noted that while Shree teaches outputting the response (mainly to the second user), he does not explicitly teach where the response is provided to the first user.
However, Newell teaches conversation support systems and methods (Figs. 1-2) operable to assist a first user to more fully participate in an ongoing conversation, comprising: determining subject matter of a current portion of the ongoing conversation based on the dialogue; selecting conversation support information that pertains to the current conversation subject; and generating a conversation queue that includes information corresponding to the selected conversation support information, wherein the conversation queue is communicated from the conversation support system to at least one conversation queue output device that presents the conversation support information to the first user (abstract; Pars. 24-25, 28, 37-42). The combination of the analogous systems would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's invention for the purpose of making the conversation assistance discreet by providing it on the user output device, so that the response can only be seen and heard by the user.

As to claim 2, Shree teaches obtaining, using at least one audio sensor of the display free body wearable computing device, audio data; and identifying that the audio data comprises speech between a plurality of speakers, the plurality of speakers comprising the user of the display free body wearable computing device and at least one other person (Figs. 2, 4-5, 12).

As to claim 3, Newell teaches where the system comprises a conversation map comprising a plurality of conversation segments of audio information, each associated with the identity of one of a plurality of conversation participants, where different conversation participants are speaking at different times during the ongoing conversation and the individual conversation segments are each associated with a particular individual participant in the ongoing conversation (Pars. 46-48, 64-68; Fig. 1).
As to claims 10-11, Newell teaches wherein the display free body wearable computing device comprises: an integrated sensing and interaction component adapted to: be positioned symmetrically on two portions of a user's head; be positioned between ears and eyes of the user; and capture a stereo image of at least a portion of a scene present in a field of view of the user; an integrated computing, powering, and securing portion; and an adjustment member adapted to position the integrated sensing and interaction component with respect to the integrated computing, powering, and securing portion, comprising a pair of cameras; speakers; a microphone array; and a touch pad (Fig. 1).

As to claim 12, Newell teaches wherein the integrated sensing and interaction component is adapted to: obtain an audio input from the integrated sensing and interaction component; perform, by the data processing system, a speech recognition action set, based on the audio input, to obtain a speech recognition result; obtain a portion of data from a remote entity, the data being based at least in part on the speech recognition result; and use the portion of the data to assist in an interaction that the user is involved in (Figs. 1-3).

Regarding claims 13-15 and 17-19, the corresponding system and instruction claims, comprising steps similar to the claims addressed above, are analogous and are therefore rejected as being unpatentable over Shree et al. in view of Newell et al. for the foregoing reasons.

Claims 4-5, 7-9, 16, and 20 are rejected under 35 U.S.C. 103 as being unpatentable over Shreeshreemal (Shree) et al. (US 2021/0056968) in view of Newell et al. (US 2021/0256046) and further in view of Alkan et al. (US 2020/0159836, now US Patent 12,411,843).
As to claims 4, 16, and 20, Shree teaches wherein obtaining the semantic analysis package comprises: prompting a large language model (412) to identify, in the transcription, at least: topics of the conversation; questions regarding the topics discussed during the conversation; and determining if the second user is satisfied with the response of the primary user (Pars. 8-9, 120-130, 134; Figs. 10/11, 1008/1108), but he does not explicitly teach determining disagreements over the response.

However, Alkan teaches a conversation assistance system that monitors and intervenes to provide intelligent resolution of conflicting information in a conversation between a group of users, by semantically analyzing the conversation to identify a disagreement/conflict (questions with no answer (or no correct answer) provided, questions with multiple inconsistent answers, or situations where users are in a conflict, e.g., cannot decide on an answer for a raised question during conversation) and providing a response to assist the user/users (Pars. 18-29). The combination of the analogous systems would have been obvious to one of ordinary skill in the art before the effective filing date of applicant's invention for the purpose of identifying and outputting resolution to disagreements that occur during the conversation, thereby improving the support provided by conversational assistance.

As to claim 5, Alkan teaches wherein the levels of disagreement regarding potential answers to the questions comprise: uncertainty levels in the questions present in the conversation; and levels of debate in the questions present in the conversation (Pars. 26-29, 78, 95).

As to claim 7, Alkan teaches wherein generating, based at least on the semantic analysis package, the supplementary information comprises: prompting, using at least one of the questions, a generative model to obtain an answer (Fig. 4, 402; Pars. 30-32, 82-84).
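The claim 4 limitation discussed above (prompting a large language model to extract topics, questions, and disagreement levels from a transcription) can be illustrated with a minimal sketch. The function name and prompt wording are hypothetical, not taken from the application or the cited references:

```python
def build_semantic_analysis_prompt(transcript: str) -> str:
    """Build a prompt asking a language model for the elements recited in
    claim 4: topics, questions about those topics, and disagreement levels.
    The wording is illustrative only."""
    return (
        "Analyze the following conversation transcript.\n"
        "1. List the topics discussed.\n"
        "2. List the questions raised about those topics.\n"
        "3. For each question, rate the level of disagreement over its answer.\n"
        "\n"
        "Transcript:\n" + transcript
    )

# Example usage with a toy transcript:
prompt = build_semantic_analysis_prompt("A: Should we ship Friday?\nB: No, Monday.")
print(prompt)
```

The model's structured reply would then serve as the "semantic analysis package" that downstream logic scores to decide whether intervening is desirable.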
As to claim 8, Shree teaches wherein prompting the user to attempt to intervene in the conversation using the supplementary information comprises: discreetly notifying the user of availability of the answer and monitoring for user feedback based on the notifying; in a first instance of the notifying where the user provides user feedback indicating agreement to the intervening: concluding that the user desires the intervening; and in a second instance of the notifying where the user does not provide user feedback: concluding that the user does not desire the intervening (Figs. 24-29).

As to claim 9, Newell teaches wherein attempting to provide the user the supplementary information comprises: initiating discreetly providing the answer to the user while monitoring for user input during the providing; in an instance of the initiating where the user provides the user input indicating that the answer is unwelcome: terminating the providing before completing the providing of the answer to the user (Pars. 9, 16, 28-29, 101).

Allowable Subject Matter

Claim 6 is objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.
The following is a statement of reasons for the indication of allowable subject matter: claim 6 is allowable because the prior art references, alone or in combination, do not teach wherein making the determination comprises: grading, using a rubric, the levels of disagreement regarding potential answers to the questions corresponding to the topics and the questions to obtain grades for the topics and the questions; identifying whether at least one of the grades exceeds a grades threshold; and in a first instance of the identifying where the at least one of the grades exceeds the grades threshold: concluding that it is desirable; and in a second instance of the identifying where none of the at least one of the grades exceeds the grades threshold: concluding that it is not desirable.

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Rothschild et al. (US 2025/0342833) discloses a device for assisting a respondent in a conversation between a querier and the respondent, the device comprising: a controller communicatively coupled to a microphone, wherein the controller is configured to: fetch a voice input from the microphone; detect a silent period during a vocal conversation between the querier and the respondent, wherein the silent period has a duration; compare the duration of the silent period with a threshold time period; and trigger a transmitter to transmit the voice input to a server upon detecting that the duration of the silent period is greater than the threshold time period, the server being configured to: generate an output corresponding to the voice input received by the server, wherein the output comprises at least one token as a response to an excerpt from the voice input; and transmit the output to a receiver; and a speaker communicatively coupled with the receiver and configured to generate a voice-based response based on the output, for assisting the respondent in responding to the conversation.
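The silence-gated trigger recited in the Rothschild device above (detect a silent period, compare its duration with a threshold, then transmit) can be sketched in a few lines. The frame length, energy floor, and threshold values here are illustrative assumptions, not values from the reference:

```python
def should_transmit(frame_energies, frame_ms=20, energy_floor=1e-4, threshold_ms=700):
    """Return True once trailing silence exceeds the threshold duration.

    frame_energies: per-frame audio energies, most recent frame last.
    A frame counts as silent when its energy falls below energy_floor.
    All parameter values are illustrative assumptions.
    """
    silent_ms = 0
    # Walk backward from the newest frame, accumulating contiguous silence.
    for energy in reversed(list(frame_energies)):
        if energy < energy_floor:
            silent_ms += frame_ms
        else:
            break
    return silent_ms > threshold_ms

# 40 speech frames followed by 40 silent frames = 800 ms of trailing silence,
# which exceeds the (assumed) 700 ms threshold and would trigger transmission.
print(should_transmit([0.5] * 40 + [0.0] * 40))
```

In the claimed device, a True result is what triggers the transmitter to send the buffered voice input to the server for response generation.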
Any inquiry concerning this communication or earlier communications from the examiner should be directed to DANIEL DEMELASH ABEBE, whose telephone number is (571) 272-7615. The examiner can normally be reached Monday-Friday, 7:00-4:00.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Daniel Washburn, can be reached at 571-272-5551. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/DANIEL ABEBE/
Primary Examiner, Art Unit 2657

Prosecution Timeline

Jul 30, 2024
Application Filed
Feb 17, 2026
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12597420: ENABLING USER-CENTERED AND CONTEXTUALLY RELEVANT INTERACTION
  2y 5m to grant · Granted Apr 07, 2026
Patent 12592235: NLU-BASED SYSTEMS AND METHOD FOR THE FACILITATED CONTROL OF INDUSTRIAL ASSETS
  2y 5m to grant · Granted Mar 31, 2026
Patent 12579380: SOCIO-MINDFULNESS IN MULTI-PARTY DISCUSSIONS
  2y 5m to grant · Granted Mar 17, 2026
Patent 12566585: SCOPE WITH TEXT AND SPEECH COMMUNICATION SYSTEM
  2y 5m to grant · Granted Mar 03, 2026
Patent 12567411: VOICE INTERACTION METHOD AND ELECTRONIC DEVICE
  2y 5m to grant · Granted Mar 03, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 89%
With Interview: 97% (+7.3%)
Median Time to Grant: 2y 7m
PTA Risk: Low
Based on 1014 resolved cases by this examiner. Grant probability derived from career allow rate.
