Prosecution Insights
Last updated: April 19, 2026
Application No. 18/788,774

INFERRING USER INTENT FOR ASSISTANCE USING A DISPLAY FREE BODY WEARABLE COMPUTING DEVICE

Status: Non-Final OA (§102)
Filed: Jul 30, 2024
Examiner: OPSASNICK, MICHAEL N
Art Unit: 2658
Tech Center: 2600 — Communications
Assignee: DELL PRODUCTS, L.P.
OA Round: 1 (Non-Final)
Grant Probability: 82% (Favorable)
Expected OA Rounds: 1-2
Time to Grant: 3y 3m
With Interview: 92%

Examiner Intelligence

Career Allow Rate: 82% (737 granted / 900 resolved), above average at +19.9% vs TC avg
Interview Lift: +10.5% (moderate), measured over resolved cases with an interview
Typical Timeline: 3y 3m avg prosecution, 46 applications currently pending
Career History: 946 total applications across all art units
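
The headline numbers above follow from simple definitions: allow rate is grants over resolved cases, and interview lift is the allow-rate difference between interviewed and non-interviewed resolved cases. A minimal sketch in Python, assuming a hypothetical ResolvedCase record (the dashboard's actual data model is not shown):

    from dataclasses import dataclass

    # Hypothetical record type; the dashboard's real schema is not shown.
    @dataclass
    class ResolvedCase:
        granted: bool
        had_interview: bool

    def allow_rate(cases: list[ResolvedCase]) -> float:
        """Share of resolved cases that ended in a grant."""
        return sum(c.granted for c in cases) / len(cases)

    def interview_lift(cases: list[ResolvedCase]) -> float:
        """Allow rate among interviewed cases minus the rate among the rest."""
        with_iv = [c for c in cases if c.had_interview]
        without_iv = [c for c in cases if not c.had_interview]
        return allow_rate(with_iv) - allow_rate(without_iv)

    # With the figures above: 737 grants over 900 resolved cases
    # gives 737 / 900 ≈ 0.819, displayed as 82%.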

Statute-Specific Performance

Statute   Rate     vs TC Avg
§101      17.7%    -22.3%
§103      33.0%    -7.0%
§102      29.9%    -10.1%
§112       6.3%    -33.7%

Deltas are relative to the Tech Center average estimate • Based on career data from 900 resolved cases
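
Each delta is simply the examiner's per-statute rate minus the Tech Center average estimate; note that every rate plus the magnitude of its delta comes to exactly 40.0%, so the underlying TC average estimate appears to be a flat 40% per statute here. A sketch of the computation, where the count inputs are back-calculated illustrations rather than actual dashboard data:

    RESOLVED = 900  # resolved cases in the examiner's career data

    # Back-calculated from the displayed rates (e.g., 17.7% of 900 ≈ 159);
    # the actual per-statute counts are not shown in the dashboard.
    STATUTE_COUNTS = {"§101": 159, "§103": 297, "§102": 269, "§112": 57}
    TC_AVERAGE = 0.40  # implied flat Tech Center average estimate

    for statute, count in STATUTE_COUNTS.items():
        rate = count / RESOLVED
        delta = rate - TC_AVERAGE
        print(f"{statute}: {rate:.1%} ({delta:+.1%} vs TC avg)")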

Office Action

§102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Objections

Claims 13 and 14 are objected to because of the following informalities: claim 13 refers to the "system of claim 10," but claim 10 is a method claim. Clearly, a typographical error has occurred, and appropriate correction is required. However, once claim 13 is changed to "method," claim 14 will have a similar issue. Therefore, in claims 13 and 14, change "system of claim" to "method of claim."

Claim Rejections - 35 USC § 102

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –

(a)(2) the claimed invention was described in a patent issued under section 151, or in an application for patent published or deemed published under section 122(b), in which the patent or application, as the case may be, names another inventor and was effectively filed before the effective filing date of the claimed invention.

Claims 1-20 are rejected under 35 U.S.C. 102(a)(2) as being anticipated by vonLiechtenstein (20250225995).

As per claims 1 and 9, vonLiechtenstein teaches a method for providing assistance to users of display free body wearable computing devices (see figure 1), the method comprising: identifying, using a first sensor of a display free body wearable computing device of the display free body wearable computing devices (figure 1, wearable glasses, earpiece, and para 0042 – other wearable devices), that a user of the display free body wearable computing device is speaking (as the recognized user is speaking into these devices – para 0101); based on the identifying: inferring, using at least a second sensor of the display free body wearable computing device, whether at least one other person is in a detection range of the first sensor (as detecting a second person – para 0079; using distance measurements – para 0086-0088); in a first instance of the inferring where no other persons are inferred as being in the detection range of the first sensor (as having the ability to isolate one speaker, based on direction and distance, from other possible speakers nearby – para 0090, last half): obtaining an assistance request outcome based on a transcription of the speaking by the user, an intention analysis prompt, and a large language model (as analyzing phonemes to generate words and sentences – para 0097, first sentence; analyzing content/context and generating context-based keywords – para 0098; using language models – para 0099); and in a first instance of the obtaining where the assistance request outcome indicates that the speaking is directed to the display free body wearable computing device: providing, by the display free body wearable computing device, computer-implemented services that are based, at least in part, on the transcription (as the speaking results are guided through the wearable glasses and earpiece – see fig. 3; based on the transcribed keywords – para 0098; and showing the results – para 0094). Regarding the inferring in claim 9, the similar steps of claim 9 are addressed above in the mapping to claim 1; the difference in claim 9 is the concept of trigger phrases, which is taught by vonLiechtenstein – see para 0075, command phrases are uttered.

As per claim 2, vonLiechtenstein teaches the method of claim 1, wherein identifying that the user of the display free body wearable computing device is speaking comprises: obtaining audio data using the first sensor, the first sensor being an audio sensor positioned to capture the speaking (as microphones capturing the speech – para 0042, first six sentences, including "the sound waves associated with speaking"); and identifying that the audio data comprises the speaking by the user (as identifying the user via speaker recognition – para 0101).

As per claim 3, vonLiechtenstein teaches the method of claim 1, wherein inferring whether the at least one other person is in the detection range of the first sensor comprises: identifying, using at least one image sensor of the display free body wearable computing device, whether the at least one other person is present in a field of view of the at least one image sensor (as image detection – para 0070; the examiner notes that in para 0070 objects are identified; however, a "stand by" person can also be near the user – para 0054); and identifying, using at least two audio sensors (figure 3) of the display free body wearable computing device (figure 1), whether the at least one other person is present within a distance threshold relative to the user (as calculating a distance to the source – para 0086-0088, taking into account distance, and relative to other audio source distances – see para 0090, last half).

As per claim 4, vonLiechtenstein teaches the method of claim 3, wherein the at least two audio sensors comprise at least one audio sensor adapted to capture audio data from a direction behind the user's back while the user wears the display free body wearable computing device (as multiple microphones – see figure 3, wherein the head mountable device 319, hand held/wrist attached device 315, and ear mountable device 314 all have microphones, and the cone of hearing/detection is not limited to sounds from the front – see figure 1).

As per claim 5, vonLiechtenstein teaches the method of claim 1, wherein obtaining the assistance request outcome comprises: prompting, using the prompt, the large language model to identify, in the transcription, a user intended target of the speaking (as identifying the target from the transcription – using a context-specific language model converting the phoneme input into words and sentences – para 0097, and then using keyword matching to determine the intended target – see last half of para 0097, identifying "fruits and vegetables" and "market").

As per claim 6, vonLiechtenstein teaches the method of claim 5, wherein prompting the large language model comprises: inferring, using the large language model and the prompt (see the mapping in claim 1 toward the context-based language models and input from the user), whether the speaking comprises a question and/or command directed by the user to the display free body wearable computing device (as, in para 0070, a search query by the user – "turn on" is an example of a command, and "find on amazon" is in query form, the English-language short form of "Can you find this on Amazon?"), wherein obtaining the assistance request outcome further comprises: in a first instance of the inferring where the user intent comprises a question and/or command directed to the display free body wearable computing device: generating the assistance request outcome to indicate that the speaking indicates that the display free body wearable computing device is being queried by the user for assistance (as the speaking results are guided through the wearable glasses and earpiece – see fig. 3; based on the transcribed keywords – para 0098; and showing the results – para 0094).

As per claim 7, vonLiechtenstein teaches the method of claim 6, wherein providing the computer-implemented services comprises: identifying whether the question and/or command refers to at least one object present in a field of view of the user (as matching the object, in the field of view, with the query – para 0070); in a first instance of the identifying where the question and/or command refers to the at least one object present in the field of view of the user: capturing, using at least one image sensor of the display free body wearable computing device, an image of the at least one object; and performing, by the display free body wearable computing device and using the image and the question and/or command, a first action set to provide the computer-implemented services (as identifying the object in a field of view – such as a lamp or item of merchandise – and then generating a query/command to find a similar object on Amazon – para 0070); in a second instance of the identifying where the question and/or command does not refer to the at least one object present in the field of view of the user: performing, by the display free body wearable computing device and using the question and/or command, a second action set to provide the computer-implemented services (as not recognizing an object in the field of view, generating the query "what is this", and then displaying the found results, either speech synthesized or on the eyeglass wear – para 0074).

As per claim 8, vonLiechtenstein teaches the method of claim 7, wherein performing the first action set comprises: re-prompting the large language model to obtain an answer to the question and/or information usable to perform the command (as, after an initial search for an observable object, showing an image of the found object and then resubmitting the query from the user, "What is this?", and audibly returning the result through the earpiece – para 0074).

As per claims 10, 11, and 13, the claim scope is directed toward the physical elements performing the steps of method claim 1. These elements are taught in vonLiechtenstein; see figures 1 and 5 for the eyepiece/earpiece containing microphones, image capture (camera), a display, and integrated power and processing elements; see also para 0051 showing a head mountable device, ear mountable device, computing component, Bluetooth communications, and the like, including loudspeakers (para 0012) and cameras and image capture in the smart glasses (para 0042).

As per claim 12, vonLiechtenstein teaches the method of claim 11, wherein the integrated sensing and interaction component is adapted to: obtain the stereo image from the pair of cameras; at least partially process the stereo image to obtain an image processing result; identify an action to be performed based, at least in part, on the image processing result and a derived result from a remote entity, the derived result being based, at least in part, on the stereo image and/or the image processing result; and use at least the speakers to perform the action (as applied to the claims above, in the instance of using the camera to capture an image of an object and, based on the command/query, finding the object via a separate software package – see para 0070-0074, wherein a lamp object is identified in the user's view, an image is captured of the lamp, and, with the command "Find on Amazon", the lamp type/model/etc. is identified by accessing the web service Amazon and the result is returned to the user; as shown at the end of para 0074, the output can be in audible form through the loudspeakers found on the earpiece, and the visual version can be shown on the smart glasses – see para 0073 and the beginning of para 0074).

As per claim 14, vonLiechtenstein teaches an audio version of claim 13, which is based on speech recognition – see para 0097, performing phoneme recognition into words/sentences, keyword spotting, and generating the result: finding a location that has fruit and vegetables.

Claims 15-17 are non-transitory computer readable medium claims that perform the steps found throughout method claims 1-14 above; as such, claims 15-17 are similar in scope and content to claims 1-14 and are therefore rejected under similar rationale as presented against claims 1-14 above. Furthermore, vonLiechtenstein teaches non-transitory computer readable media storing executable instructions performing the disclosed steps (para 0110).

Claims 18-20 are data processing system claims that perform the steps found throughout method claims 1-14 above; as such, claims 18-20 are similar in scope and content to claims 1-14 and are therefore rejected under similar rationale as presented against claims 1-14 above. Furthermore, vonLiechtenstein teaches processors performing the steps (para 0032, as an example) and accessing memory structures (para 0028).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Please see the related art listed on the PTO-892 form. Furthermore, the following references were found to teach certain claim features and specification elements:

Garg et al. (20240221738) teaches a multi-device, user-mountable system (figs. 6b, 7) that processes speech and other sensor input through language/context models to generate a result (fig. 8).

Du et al. (20230071778) teaches user-wearable sensors that perform eye tracking, gaze tracking, depth sensing, and camera capture (see figure 1) and that distinguish multiple other speakers based on beamforming techniques (fig. 4).

Rochford et al. (20190019508) teaches headgear (figs. 4a, 4b, 5a-5b) that uses sensors to determine image/eye/lip movement and processes verbal commands accordingly (see figure 7).

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Michael Opsasnick, telephone number (571) 272-7623, who is available Monday-Friday, 9am-5pm. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Mr. Richemond Dorvil, can be reached at (571) 272-7602. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).

/Michael N Opsasnick/
Primary Examiner, Art Unit 2658
02/04/2026
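
For readers mapping the rejection back to the claim language, independent claim 1 as characterized above reduces to a short decision pipeline. The sketch below is an illustrative paraphrase only; every device, sensor, and method name is hypothetical, taken neither from the application nor from vonLiechtenstein:

    def provide_assistance(device, intention_prompt, llm):
        """Illustrative flow of claim 1 as characterized in the rejection."""
        # First sensor (audio): identify that the wearer is speaking.
        if not device.audio_sensor.user_is_speaking():
            return
        # Second sensor: infer whether another person is in detection range.
        if device.second_sensor.others_in_detection_range():
            return  # claim 1 recites only the no-other-persons branch
        # Obtain an assistance request outcome from the transcription,
        # an intention analysis prompt, and a large language model.
        transcription = device.audio_sensor.transcribe()
        outcome = llm.complete(intention_prompt, transcription)
        # If the speaking is directed at the device, provide
        # computer-implemented services based on the transcription.
        if outcome.directed_at_device:
            device.provide_services(transcription)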

Prosecution Timeline

Jul 30, 2024
Application Filed
Feb 04, 2026
Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602554
SYSTEMS AND METHODS FOR PRODUCING RELIABLE TRANSLATION IN NEAR REAL-TIME
Granted Apr 14, 2026 • 2y 5m to grant

Patent 12592246
SYSTEM AND METHOD FOR EXTRACTING HIDDEN CUES IN INTERACTIVE COMMUNICATIONS
Granted Mar 31, 2026 • 2y 5m to grant

Patent 12586580
System For Recognizing and Responding to Environmental Noises
Granted Mar 24, 2026 • 2y 5m to grant

Patent 12579995
Automatic Speech Recognition Accuracy With Multimodal Embeddings Search
Granted Mar 17, 2026 • 2y 5m to grant

Patent 12567432
VOICE SIGNAL ESTIMATION METHOD AND APPARATUS USING ATTENTION MECHANISM
Granted Mar 03, 2026 • 2y 5m to grant
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 82% (92% with interview, a +10.5% lift)
Median Time to Grant: 3y 3m
PTA Risk: Low

Based on 900 resolved cases by this examiner. Grant probability is derived from the career allow rate.
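
The with-interview figure appears to be the base grant probability plus the interview lift: using the unrounded career rate, 737/900 ≈ 81.9%, and 81.9% + 10.5% ≈ 92.4%, displayed as 92%. A one-line sketch, assuming simple additive stacking (the cap at 100% is an assumption, not documented behavior):

    def projected_grant_probability(base_pct: float, lift_pct: float) -> float:
        """Additive projection; the 100% cap is assumed, not documented."""
        return min(base_pct + lift_pct, 100.0)

    # projected_grant_probability(737 / 900 * 100, 10.5) ≈ 92.4, shown as 92%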
