Prosecution Insights
Last updated: April 17, 2026
Application No. 18/338,745

HAPTIC-FEEDBACK BILATERAL HUMAN-MACHINE INTERACTION METHOD BASED ON REMOTE DIGITAL INTERACTION

Current status: Non-Final Office Action (§103)

Filed: Jun 21, 2023
Examiner: ANDERSON, BRODERICK C
Art Unit: 2178
Tech Center: 2100 — Computer Architecture & Software
Assignee: unknown
OA Round: 3 (Non-Final)

Grant Probability: 74% (Favorable)
Expected OA Rounds: 3-4
Time to Grant: 3y 1m
Grant Probability With Interview: 93%

Examiner Intelligence

Career Allow Rate: 74% (190 granted / 258 resolved), +18.6% vs Tech Center average (above average)
Interview Lift: +19.1% higher allowance rate among resolved cases with an examiner interview
Typical Timeline: 3y 1m average prosecution; 20 applications currently pending
Career History: 278 total applications across all art units

Statute-Specific Performance

§101: 9.8% (-30.2% vs TC avg)
§103: 60.1% (+20.1% vs TC avg)
§102: 18.4% (-21.6% vs TC avg)
§112: 7.1% (-32.9% vs TC avg)
Tech Center averages are estimates. Based on career data from 258 resolved cases.
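
Reading note: if the "vs TC avg" figures are taken as simple percentage-point differences from the examiner's own per-statute rates (an assumption; the dashboard does not state its methodology), the implied Tech Center baselines can be recovered directly. A minimal Python sketch:

# Assumption: "vs TC avg" is a percentage-point difference between the
# examiner's per-statute rate and the Tech Center average for that statute.
examiner_rate = {"101": 9.8, "103": 60.1, "102": 18.4, "112": 7.1}       # figures above, in %
delta_vs_tc   = {"101": -30.2, "103": 20.1, "102": -21.6, "112": -32.9}  # points vs TC avg

for statute, rate in examiner_rate.items():
    tc_avg = rate - delta_vs_tc[statute]   # e.g. 60.1 - 20.1 = 40.0 for §103
    print(f"§{statute}: examiner {rate:.1f}%, implied TC average {tc_avg:.1f}%")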

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status
The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. This action is in response to the Request for Continued Examination filed 12/22/2025.

Priority
Receipt is acknowledged of certified copies of papers required by 37 CFR 1.55.

Response to Request for Continued Examination
A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered. The response filed on 12/22/2025 has been entered and made of record. Claim 1 is amended. Claims 1-5 are pending. The previous rejections of claims 1-5 under 35 USC 103 over Mistry et al in view of Tsunoda, Brown et al, You et al, Aimone et al, and Pance et al are maintained, but have been updated in response to the amendment.

Drawings
The drawings filed 11/18/2024 were accepted.

Claim Rejections - 35 USC § 103
In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action: A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-5 are rejected under 35 U.S.C. 103 as being unpatentable over Mistry et al (US 20140143784 A1; filed 8/30/2013) in view of Tsunoda (US 5027407 A; filed 1/23/1990), Brown et al (US 20040083101 A1, filed 10/23/2002), You et al (US 20170344116 A1; filed 12/1/2015), Aimone et al (US 20220132194 A1; filed 1/7/2022), and Pance et al (US 20130154814 A1; filed 2/19/2013).

With regards to claim 1, Mistry et al discloses A haptic-feedback bilateral human-machine interaction method that enables the physicalization of remote digital interaction…, and comprises three input methods S1 (Mistry et al, paragraph 120: “Touch-sensitive areas may detect any suitable contact”), S2 (Mistry et al, paragraph 113: “the device's sensor may capture the user's hand/arm/fingers in the angle of view of the sensor while performing a gesture to be captured by the same or other sensors (e.g.
a gesture selecting an object in the angle of view of the device, such as, for example, pinching, tapping, or pulling toward or pushing away)”), and S3 (Mistry et al, paragraph 165: “a user may interact with the device via a variety of input mechanisms or types including, for example, … a speech interface (e.g. including voice input and speech recognition for applications including text input, communication, or searching)”), and one output and interaction implementation method S4 (Mistry et al, paragraph 97: “provide a user with haptic feedback (e.g. a tactile click)”), wherein specifically comprises: S1. Touch Recognition S1.1. To start, users input touch, gestures, touch, slide, swipe, tap, pat, or other forms of physical inputs and movements on a touch-responsive surface that consists of electric-inducted materials (Mistry et al, paragraph 35: “FIGS. 95A-95D illustrate example user touch input to a device”); S1.2. Physical inputs, captured as the pressure-proportional analogue signals, are then being converted into electric signals in the forms of changes of capacitance, resistance, or magnetics (Mistry et al, paragraph 79: “Touch-sensitive layer 210 may be composed of any suitable material and be of any suitable type, such as for example resistive, surface acoustic wave, capacitive (including mutual capacitive or self-capacitive), infrared, optical, dispersive, or any other suitable type.”); S1.3. The converted electrical signals are further processed and converted into a series of two- or three-dimensional coordinate data (Mistry et al, paragraph 168: “accept user touch input and allow the device to determine the x-y coordinates of a user's touch); S1.4. A CPU processor analyzes, parse, and then map the series of electrical signals and coordinate data to generate a series of interaction commands (Mistry et al, paragraph 168: “Touch gestures (described herein) may include multi-directional swiping or dragging, pinching, double-tapping, pressing or pushing on the display (which may cause a physical movement of the display in an upward or downward direction), long pressing, multi-touch (e.g. the use of multiple fingers or implements for touch or gesturing anywhere on the touch-sensitive interface), or rotational touch gestures.”); S1.5. Interaction commands are transmitted to a Central Processing Unit (CPU) through a built-in integrated circuit (Mistry et al, paragraph 114: “FIG. 18B illustrates the optical sensor integrated circuit 1850 on or in the optical sensor module 1860, which also houses optical sensor 1855. Communication between the main printed circuit board of device 1830 and electronics in camera module 1860 occur via flexible printed circuit 1845;” paragraph 239: “a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs)… where appropriate”)… S2. Gesture Recognition S2.1. To start: users input gestures, physical movements, touch, slide, swipe, tap, pat, or other forms of interaction inputs within the gesture sensing area (Mistry et al, paragraph 113: “the device's sensor may capture the user's hand/arm/fingers in the angle of view of the sensor while performing a gesture to be captured by the same or other sensors (e.g. a gesture selecting an object in the angle of view of the device, such as, for example, pinching, tapping, or pulling toward or pushing away)”); S2.2. 
inductive sensing units in a gesture recognition module, involving camera vision recognition system, or infrared, LiDAR, or proximity sensor, or magnetic sensor, or ultrasonic motion sensor, continuously capture the three-dimensional positions of the dynamic gestures (Mistry et al, paragraph 138: “Gestures may be of any suitable type, may be detected by any suitable sensors (e.g. inertial sensors, touch sensors, cameras, or depth sensors), and may be associated with any suitable functionality. For example, one or more depth sensors may be used in conjunction with one or more cameras to capture a gesture”), and convert them into corresponding 3D coordinate locations and data series; (Mistry et al, paragraph 116: “Pattern-based gesture detectors 1938 evaluate sensor input against a predetermined library of gesture patterns 1940, such as for example patterns determined by empirical evaluation of sensor output when a gesture is performed”); S2.3. CPU processor analyzes, parse, and then map the series of dynamic gestures and coordinate data to generate a series of interaction commands (Mistry et al, paragraph 116: “One or more gesture priority decoders 1948 evaluate output from gesture detectors, locked state detectors, or both to determine which, if any, of the detected gestures should be utilized to provide functionality to a particular application or system-level process”); S2.4. Interaction commands are transmitted to the CPU through a built-in integrated circuit (Mistry et al, paragraph 114: “FIG. 18B illustrates the optical sensor integrated circuit 1850 on or in the optical sensor module 1860, which also houses optical sensor 1855. Communication between the main printed circuit board of device 1830 and electronics in camera module 1860 occur via flexible printed circuit 1845;” paragraph 239: “a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs)… where appropriate”)… S3. Voice Recognition S3.1. Users speak and input audio signals, or a CPU processor acquires audio sources through a wireless communication module (Mistry et al, paragraph 165: “a user may interact with the device via a variety of input mechanisms or types including, for example, the outer ring, touch-sensitive interfaces (e.g. the touch-sensitive layer), gestures performed by the user (described herein), or a speech interface (e.g. including voice input and speech recognition for applications including text input, communication, or searching)”); … S3.5 Interaction commands are transmitted to the CPU through a built-in integrated circuit (Mistry et al, paragraph 239: “a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such, as for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate;” Since the invention is implemented on a computer-readable medium, it must transmit the signals for the feedback)… S4.2. 
The haptic, tactile and kinesthetic feedback and representation signals are downloaded to the CPU and storage unit via wireless communication modules, or are transmitted to through a built-in integrated circuit… (Mistry et al, paragraph 239: “a computer-readable non-transitory storage medium or media may include one or more semiconductor-based or other integrated circuits (ICs) (such, as for example, field-programmable gate arrays (FPGAs) or application-specific ICs (ASICs)), hard disk drives (HDDs), hybrid hard drives (HHDs), optical discs, optical disc drives (ODDs), magneto-optical discs, magneto-optical drives, floppy diskettes, floppy disk drives (FDDs), magnetic tapes, solid-state drives (SSDs), RAM-drives, SECURE DIGITAL cards or drives, any other suitable computer-readable non-transitory storage media, or any suitable combination of two or more of these, where appropriate;” Since the invention is implemented on a computer-readable medium, it must transmit the signals for the feedback); S4.3. The CPU processes and interprets the haptic, tactile and kinesthetic feedback signals (including but not limited to vibration frequencies, vibration intensities, vibration intervals)… into output signals, and to be applied to an array of haptic and kinesthetic feedback actuators (Mistry et al, paragraph 123: “components of the device may also provide haptic feedback to the user. For example, one or more rings, surfaces, or bands may vibrate, produce light, or produce audio;” the vibration frequencies, intensities, and intervals are present in Mistry since a vibration will have these values associated with it); S4.4. The haptic and kinesthetic feedback signals provide haptic and kinesthetic stimulation through the activation of the haptic and kinesthetic feedback actuators within a wearable device (Mistry et al, paragraph 123: “components of the device may also provide haptic feedback to the user. For example, one or more rings, surfaces, or bands may vibrate, produce light, or produce audio”); S4.5. Since the wearable device is in direct contact with human skin, haptic and kinesthetic stimulation can be directly perceived by the user (Mistry et al, paragraph 123: “haptic feedback” Fig. 15: the device is the watch 1500, which is in contact with the user’s wrist); S4.6. Users recognize corresponding touch, gestures, activities, or any other forms of physical interaction by perceiving different vibration frequencies, vibration intensities, vibrational interval times and the sequence of vibrations between modules, achieving the effect of physicalizing digital interaction, achieving the effect of physicalization of the digital interactions (Mistry et al, paragraph 123: “components of the device may also provide haptic feedback to the user. For example, one or more rings, surfaces, or bands may vibrate, produce light, or produce audio;” the user is human (as shown in fig. 11 and 15) and thus is generally capable of recognizing vibrations from a worn device), perceiving the physical inputs from other users (Mistry et al, Fig. 11 and 15: the figures show that the user is a human, which is able to perceive physical inputs (e.g. gestures or voice) from other users/humans); and concluding the interaction process (Mistry et al, Fig. 11 and 15: the figures show that the user is a human, thus the interaction process with the human will necessarily end at some point). 
However, Mistry et al does not disclose interaction through context-aware semantic reasoning and mapping… are uploaded to a haptic, tactile, and kinesthetic-based semantic database, which is on a cloud, or alternatively stored within a storage unit in a device control system; Physiological information is also captured by biosensors, which includes but not limited to Photoplethysmography (PPG) heart rated sensor, or electroencephalogram (EEG) brain wave sensors, and synchronized to the haptic, tactile, and kinesthetic-based semantic database to enhance the recognition of the interaction commands, contexts, and user status and emotions; S1.6. The semantic database then translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of emotions, feelings and actions; … are uploaded to the haptic, tactile, and kinesthetic-based semantic database, which is on the cloud, or alternatively stored within the storage unit in the device control system… Physiological information is also captured by biosensors, which includes but not limited to PPG heart rated sensor, or EEG brain wave sensors, and is synchronized to the haptic, tactile, and kinesthetic-based semantic database to enhance the recognition of the interaction commands, contexts, and user status and emotions… S2.5. The semantic database then translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of emotions, feelings and actions;… S3.2. analogue signals of the pre-processed audio are then filtered and converted into digital audio signals by an analogue convertor; S3.3. The converted digital audio signals are parsed and translated into text inputs, which are then intercepted as context, instructions, or commands by the processor, as well as being processed through contextual semantic recognition of emotions, feelings and actions; S3.4. The processor analyzes, parse, and map the text inputs and the contextual semantic information to a series of interaction commands; …and are uploaded to the haptic, tactile, and kinesthetic-based semantic database, which is on the cloud, or alternatively stored within the storage unit in the device control system; Physiological information is also captured by biosensors, which includes but not limited to PPG heart rated sensor, or EEG brain wave sensors, and is synchronized to the haptic, tactile, and kinesthetic-based semantic database to enhance the recognition of the interaction commands, contexts, and user status and emotions; S3.6 The semantic database then translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of emotions, feelings and actions… S4. Interaction 54.1. To start: The haptic, tactile, and kinesthetic-based semantic database translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of emotions, feelings and actions… signals (including but not limited to… vibration sequence in between an array of vibrational actuators, kinesthetic movements). Tsunoda teaches S3.2. 
analogue signals of the pre-processed audio are then filtered and converted into digital audio signals by an analogue convertor (Tsunoda, Col 2, lines 52-57: "Acoustic analysis section 21 extracts digital feature data from the input voice signal using a combination of a filter bank including a serially connected rectifier, an analog band-pass filter and a low-pass filter and an analog-to-digital (A/D) converter for converting the output from the filter bank into digital data"). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Mistry, and Tsunoda such that the audio input is filtered and converted to a digital signal. This would have enabled the invention to process the speech audio for recognition digitally (Tsunoda, col 2, lines 5-10: “a recognition computing section for performing a similarity computation between the time-serially normalized feature data stored in the second memory section and reference pattern data, and a determining section for recognizing and determining the input signal based on the output data from the recognition computing section.”). Brown et al teaches S3.3. The converted digital audio signals are parsed and translated into text inputs (Brown et al, paragraph 78: “Voice to text converter 245, in turn, sends the textual form of the command and words surrounding the command (parameters) to command processor 250 for processing”), which are then intercepted as context, instructions, or commands by the CPU processor (Brown et al, paragraph 45: “When a command is identified, command filter 215 sends the analog speech to voice to text converter 245 which converts the command and words surrounding the command into a textual form”), as well as being processed through contextual semantic recognition of emotions, feelings and actions (Brown et al, abstract: “Voice inflections and any emotional stress present in the voices of the users can also be detected and added to the collected information”); S3.4. The CPU processor analyzes, parse, and map the text inputs and the contextual semantic information to a series of interaction commands (Brown et al, paragraph 92: “If a command was received, decision 430 branches to “yes” branch 432 whereupon the personal telephony recorder processes the received command (predefined process 435, see FIG. 20 for processing details);” Fig. 20-21 show flow charts for mapping the command inputs to actions). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Mistry et al, Tsunoda, and Brown et al such that the inputs can include voice commands from an audio input. This would have enabled the invention to provide more “sophisticated analysis of the recorded data for deeper contextual meaning of the conversations” (Brown et al, abstract). You et al teaches enables… interaction through context-aware semantic reasoning and mapping (You et al, paragraph 25: “a semantic aware conversion table or other mapping may be used.”)… are uploaded to the haptic, tactile, and kinesthetic-based semantic database (You et al, paragraph 25: “the multimodality semantic mixer 230 converts the property data into a format that is able to be rendered on the haptic rendering device. 
In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”), which is on the cloud, or alternatively stored within the storage unit in the device control system (You et al, paragraph 19: “Any or all of the servers 112, 114, 115, 122, 124, 125 may individually, in groups or all together form and provide information for producing haptic output at a user device 126, 128, 150. The servers may form a server system, e.g. a cloud.”)… S1.6. The semantic database then translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of… actions (You et al, paragraph 25: “In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”)… S2.5. The semantic database then translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of… actions (You et al, paragraph 25: “In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”)… S3.6 The semantic database then translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of… actions (You et al, paragraph 25: “In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”)… S4. Interaction S4.1. To start: The haptic, tactile, and kinesthetic-based semantic database translates and maps the interaction commands to the corresponding haptic, tactile, or kinesthetic representations, with the contextual semantic recognition of… actions (You et al, paragraph 25: “In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Mistry et al, Tsunoda, Brown et al, and You et al such that the invention includes a semantic database for mapping the input to types of feedback. This would have enabled the invention to improve the function of haptic output devices (You et al, paragraph 2: “solutions that improve the function of haptic output devices”). Aimone et al teaches physiological information is also captured by biosensors, which includes but not limited to PPG heart rated sensor, or EEG brain wave sensors (Aimone et al, paragraph 117: “Non-limiting features of this headset may include: an unobtrusive soft-band headset that can be confidently worn in public; and differentiation from prior art consumer EEG solutions through the use of 3, or 4, or more electrodes (rather than one or two). This advancement may enable: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions)”), and synchronized to the haptic, tactile, and kinesthetic-based semantic database to enhance the recognition of the interaction commands, contexts, and user status and emotions (Aimone et al, paragraph 404: “Feedback through a haptic or tactile feedback device, such as a vibrotactile device may also be provided or modulated, such as in a mobile phone, pager, vibrator, or other vibrating element in a device.
Vibrotactile feedback can be directly proportional to brainwaves. For example, the more over the threshold of a certain band or state, the more intense the vibration, as controlled by the rules engine;” paragraph 406: “Like emotion can be used to annotate text message, so can these communications be shared through vibrotactile feedback (e.g. an “I'm thinking about you” buzz)”)… contextual semantic recognition of emotions, feelings (Aimone et al, paragraph 94: “the analyzer (a) accesses the brain-state data, (b) analyzes the brain-state data, (c) maps the brain-state data into one or more of a plurality of moods or emotional states”)… physiological information is also captured by biosensors, which includes but not limited to PPG heart rated sensor, or EEG brain wave sensors (Aimone et al, paragraph 117: “Non-limiting features of this headset may include: an unobtrusive soft-band headset that can be confidently worn in public; and differentiation from prior art consumer EEG solutions through the use of 3, or 4, or more electrodes (rather than one or two). This advancement may enable: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions)”), and synchronized to the haptic, tactile, and kinesthetic-based semantic database to enhance the recognition of the interaction commands, contexts, and user status and emotions (Aimone et al, paragraph 404: “Feedback through a haptic or tactile feedback device, such as a vibrotactile device may also be provided or modulated, such as in a mobile phone, pager, vibrator, or other vibrating element in a device. Vibrotactile feedback can be directly proportional to brainwaves. For example, the more over the threshold of a certain band or state, the more intense the vibration, as controlled by the rules engine;” paragraph 406: “Like emotion can be used to annotate text message, so can these communications be shared through vibrotactile feedback (e.g. an “I'm thinking about you” buzz)”)… contextual semantic recognition of emotions, feelings (Aimone et al, paragraph 94: “the analyzer (a) accesses the brain-state data, (b) analyzes the brain-state data, (c) maps the brain-state data into one or more of a plurality of moods or emotional states”)… physiological information is also captured by biosensors, which includes but not limited to PPG heart rated sensor, or EEG brain wave sensors (Aimone et al, paragraph 117: “Non-limiting features of this headset may include: an unobtrusive soft-band headset that can be confidently worn in public; and differentiation from prior art consumer EEG solutions through the use of 3, or 4, or more electrodes (rather than one or two). This advancement may enable: estimation of hemispheric asymmetries and thus facilitate measurements of emotional valence (e.g. positive vs. negative emotions)”), and synchronized to the haptic, tactile, and kinesthetic-based semantic database to enhance the recognition of the interaction commands, contexts, and user status and emotions (Aimone et al, paragraph 404: “Feedback through a haptic or tactile feedback device, such as a vibrotactile device may also be provided or modulated, such as in a mobile phone, pager, vibrator, or other vibrating element in a device. Vibrotactile feedback can be directly proportional to brainwaves. 
For example, the more over the threshold of a certain band or state, the more intense the vibration, as controlled by the rules engine;” paragraph 406: “Like emotion can be used to annotate text message, so can these communications be shared through vibrotactile feedback (e.g. an “I'm thinking about you” buzz)”)… contextual semantic recognition of emotions, feelings (Aimone et al, paragraph 94: “the analyzer (a) accesses the brain-state data, (b) analyzes the brain-state data, (c) maps the brain-state data into one or more of a plurality of moods or emotional states”)… signals (including but not limited to… kinesthetic movements) (Aimone et al, paragraph 409: “Additional feedback modalities may include:… moving motors, actuators or solonoids”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Mistry et al, Tsunoda, Brown et al, You et al, and Aimone et al such that a user’s emotions are used to add context to the inputs and to the mapping of inputs with outputs. This would have enabled the device to use the feedback to represent the user’s emotions (Aimone et al, paragraph 405: “Emotions or liking, disliking or anger could be represented as such. For example, if one is angry, the state could be transmitted to a tactile actuator, and the movements of the actuator could become more violent, stronger or faster. Likewise, calming could be represented as such as well, and could be communicated.”). Pance et al teaches signals (including but not limited to… vibration sequence in between an array of vibrational actuators…) (Pance et al, abstract: “The system further includes a controller to activate a first actuator of the plurality of actuators to induce a first vibration at a selected input location of the input surface and to activate one or more additional actuators to induce at least a second vibration”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Mistry et al, Tsunoda, Brown et al, You et al, Aimone et al, and Pance et al such that a plurality of haptic actuators are used in sequence. This would have enabled the invention to provide localized haptic feedback based on the input (Pance et al, paragraph 5: “providing improved localization of haptic feedback provided through an input surface”). With regards to claim 2, which depends on claim 1, Mistry et al discloses the CPU processors used in the S1, S2, S3, and S4 (Mistry et al, abstract: “an apparatus includes a wearable computing device that includes one or more processors and a memory. The memory is coupled to the processors and includes instructions executable by the processors.”) acquire and process audio sources through a wireless communication module (Mistry et al, paragraph 89: “the display module may include a camera or other optical sensor, microphone, or antenna;” paragraph 115: “Sensors may communicate with each other and with processing and memory components through any suitable wired or wireless connections, such as for example direct electrical connection, NFC, or BLUETOOTH.”). With regards to claim 3, which depends on claim 1, Mistry et al discloses wherein the haptic-feedback bilateral human-machine interaction method based on remote digital interaction is equipped with a perceivable tangible user interface (Mistry et al, paragraph 136: “the system may power up the display and enable the touch screen for further interactions”) and a human-machine interaction module (Mistry et al, Fig. 
19: Sensor Hub 19B; the human-machine interaction module can be interpreted in several ways, but the sensor hub receives raw sensor data and outputs gesture data to the application processor 19C). With regards to claim 4, which depends on claim 3, Mistry et al discloses wherein the perceivable tangible user interface is a user interaction interface controlled by a control unit (Mistry et al, paragraph 120: “touch-sensitive areas may comprise at least a portion of a device's display, ring, or band. Like for other sensors, in particular embodiments touch-sensitive areas may be activated or deactivated for example based on context, power considerations, or user settings;” the control unit can be interpreted as the hub 19B or application processor 19C)… including singular or multiple vibrational stimulation from either singular actuator module, or a series of actuator modules in array arrangement, and the vibration time, vibration interval, vibration sequence, and other parameters (Mistry et al, paragraph 123: “components of the device may also provide haptic feedback to the user. For example, one or more rings, surfaces, or bands may vibrate, produce light, or produce audio;” the vibration frequencies, intensities, and intervals are present in Mistry since a vibration will have these values associated with it). However, Mistry et al does not disclose activate a single actuator or actuators in array arrangement to provide haptic, tactile and kinesthetic stimulations, through mapping of haptic, tactile, and kinesthetic feedback signals from the tactile and kinesthetic feedback semantic database, and the translation of haptic, tactile, and kinesthetic representations. You et al teaches activate a single actuator or actuators in array arrangement to provide haptic, tactile and kinesthetic stimulations, through mapping of haptic, tactile, and kinesthetic feedback signals from the tactile and kinesthetic feedback semantic database, and the translation of haptic, tactile, and kinesthetic representations (You et al, paragraph 25: “the multimodality semantic mixer 230 converts the property data into a format that is able to be rendered on the haptic rendering device. In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”). It would have been obvious to a person of ordinary skill in the art before the effective filing date to have combined Mistry et al, Tsunoda, Brown et al, You et al, Aimone et al, and Pance et al such that the invention includes a semantic database for mapping the input to types of feedback. This would have enabled the invention to improve the function of haptic output devices (You et al, paragraph 2: “solutions that improve the function of haptic output devices”). With regards to claim 5, which depends on claim 3, Mistry et al discloses the touch panel monitors the user's input gestures (Mistry et al, paragraph 120: “Touch-sensitive areas may detect any suitable contact, such as swipes, taps, contact at one or more particular points or with one or more particular areas, or multi-touch contact”) in real-time (Mistry et al, paragraph 231: “one or more computer systems 13700 may perform in real time or in batch mode one or more steps of one or more methods described or illustrated herein”) and transmits the acquired gesture data to a control unit (Mistry et al, Fig. 
19: the touch panel is among the touch sensors in Sensors 19A and outputs the raw sensor data to the sensor hub to be converted into gesture data 1946, which is then used by the application processor 19C to control applications; the control unit can be interpreted as the sensor hub 19B, or a portion of it).

Response to Arguments
Applicant's arguments filed 12/22/2025 regarding the teaching of context-aware semantic reasoning and mapping have been fully considered but they are not persuasive (this argument is spread among sections A, B, D, E, and G of the remarks). Applicant argues that the rejection does not teach the claims as amended, which now include “enables… interaction through context-aware semantic reasoning and mapping” in the preamble of claim 1. Examiner disagrees, and argues that the combination of Mistry et al and You et al is sufficient to teach this limitation. As stated in the rejection of claim 1 above, You et al teaches a semantic conversion/mapping between an interactive element and haptic feedback (You et al, paragraph 25: “the multimodality semantic mixer 230 converts the property data into a format that is able to be rendered on the haptic rendering device. In converting the property data to a multimodal data structure 250, a semantic aware conversion table or other mapping may be used.”). When combined with Mistry et al, a reasonable combination would result in the inputs and haptic feedback taught by Mistry et al being connected using the semantic aware conversion table taught by You et al. Thus the argument is not persuasive. Note: Large portions of the remarks are dedicated to arguing that the other art fails to teach this limitation. Examiner agrees, but continues to argue that You et al teaches it and is combinable with the other art.

Applicant's arguments with regards to the art combinations have been fully considered but they are not persuasive. Applicant argues that there is no motivation to combine the cited art in a way that would teach semantic reasoning or contextual mapping (Remarks, sections C, F). Applicant merely argues that no reference teaches a “semantic database that maps multimodal contextual features to composed tactile language and renders those representations with contextual meaning” (remarks, p. 10 – argument C, p. 12 – argument F). Examiner disagrees, and, as argued above and stated in the rejection of claim 1 above, argues that the combination of Mistry et al and You et al teaches the semantic mapping between input, context, and tactile feedback. Thus the argument is not persuasive.

Conclusion
Any inquiry concerning this communication or earlier communications from the examiner should be directed to BRODERICK C ANDERSON, whose telephone number is (313) 446-6566. The examiner can normally be reached Monday-Tuesday, Thursday-Saturday 9-5 PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Stephen Hong, can be reached at 571-272-4124. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/B.C.A/
Examiner, Art Unit 2173

/STEPHEN S HONG/
Supervisory Patent Examiner, Art Unit 2178
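
For orientation only: the limitation at the center of this rejection (steps S1.6, S2.5, S3.6, and S4.1 above) describes a semantic database that turns a recognized interaction command, plus inferred emotional context, into a haptic, tactile, or kinesthetic representation (vibration frequency, intensity, interval, and actuator sequence). The Python sketch below is purely illustrative of that idea; the command names, emotion labels, and pattern values are assumptions and are not taken from the application or the cited references.

# Illustrative toy "haptic semantic database"; all entries and values are
# invented for this sketch and do not come from the application or prior art.
from dataclasses import dataclass

@dataclass
class HapticPattern:
    frequency_hz: float        # vibration frequency
    intensity: float           # relative drive level, 0.0-1.0
    interval_ms: int           # pause between pulses
    actuator_sequence: tuple   # firing order across an actuator array

SEMANTIC_DB = {
    # (interaction command, inferred emotion) -> haptic representation
    ("tap", "calm"):    HapticPattern(120.0, 0.3, 200, (0,)),
    ("tap", "excited"): HapticPattern(220.0, 0.8, 80, (0, 1, 2)),
    ("swipe", "calm"):  HapticPattern(150.0, 0.4, 150, (0, 1)),
}

def map_to_haptics(command: str, emotion: str) -> HapticPattern:
    # Fall back to a neutral default pattern when no entry matches the context.
    return SEMANTIC_DB.get((command, emotion), HapticPattern(100.0, 0.2, 250, (0,)))

print(map_to_haptics("tap", "excited"))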

Prosecution Timeline

Jun 21, 2023: Application Filed
Aug 23, 2024: Non-Final Rejection — §103
Nov 18, 2024: Response Filed
Dec 04, 2024: Final Rejection — §103
Jul 21, 2025: Response after Non-Final Action
Dec 22, 2025: Request for Continued Examination
Jan 21, 2026: Response after Non-Final Action
Jan 24, 2026: Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12572199
METHOD AND APPARATUS FOR GENERATING GROUP EYE MOVEMENT TRAJECTORY, COMPUTING DEVICE, AND STORAGE MEDIUM
Granted Mar 10, 2026 (2y 5m to grant)
Patent 12564337
RECURRENT NEURAL NETWORK FOR TUMOR MOVEMENT PREDICTION
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12566821
GENERATIVE SYSTEM FOR WRITING ENTITY RECOMMENDATIONS
Granted Mar 03, 2026 (2y 5m to grant)
Patent 12561863
CREATING AND MODIFYING CIRCULAR ARCS WHILE MAINTAINING ARC QUALITIES WITHIN A DIGITAL DESIGN DOCUMENT
Granted Feb 24, 2026 (2y 5m to grant)
Patent 12547888
METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR TRAINING IMAGE SEMANTIC SEGMENTATION NETWORK
Granted Feb 10, 2026 (2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.

Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 74%
With Interview: 93% (+19.1%)
Median Time to Grant: 3y 1m
PTA Risk: High
Based on 258 resolved cases by this examiner. Grant probability derived from career allow rate.
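
A minimal sketch of how these projections reconcile with the examiner statistics above, assuming the grant probability is simply the career allow rate and the interview figure adds the observed +19.1-point lift (the tool's exact model is not disclosed):

# Assumption: grant probability == career allow rate, and the interview
# projection is that base rate plus the +19.1-point lift reported above.
granted, resolved = 190, 258            # career totals from the Examiner Intelligence section
base = granted / resolved               # ~0.736, displayed as 74%
interview_lift = 0.191                  # +19.1-point allowance lift with an interview
with_interview = base + interview_lift  # ~0.927, displayed as 93%

print(f"Grant probability: {base:.0%}")
print(f"With interview: {with_interview:.0%}")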
