Prosecution Insights
Last updated: April 19, 2026
Application No. 18/293,697

TACTILE PRESENTATION DEVICE

Status: Non-Final OA (§103)
Filed: Jan 30, 2024
Examiner: AYAD, MARIA S
Art Unit: 2172
Tech Center: 2100 — Computer Architecture & Software
Assignee: Toyoda Gosei Co., Ltd.
OA Round: 1 (Non-Final)
Grant Probability: 33% (At Risk)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
Grant Probability With Interview: 50%

Examiner Intelligence

Grants only 33% of cases.
Career Allow Rate: 33% (53 granted / 159 resolved; -21.7% vs TC avg)
Interview Lift: +17.1% (strong), comparing resolved cases with vs. without an interview
Typical Timeline: 3y 10m avg prosecution; 36 applications currently pending
Career History: 195 total applications across all art units

Statute-Specific Performance

§101: 11.9% (-28.1% vs TC avg)
§103: 54.2% (+14.2% vs TC avg)
§102: 12.4% (-27.6% vs TC avg)
§112: 14.1% (-25.9% vs TC avg)
Deltas are shown against a Tech Center average estimate. Based on career data from 159 resolved cases.
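As a quick sanity check on these statute-specific figures, each "vs TC avg" delta implies the same underlying Tech Center baseline. A minimal Python sketch (the ~40% baseline is inferred from the numbers above, not stated anywhere in the source):

```python
# Implied Tech Center baseline per statute: examiner allow rate
# minus its "vs TC avg" delta. All four pairs point to the same value.
rates = {
    "§101": (11.9, -28.1),
    "§103": (54.2, +14.2),
    "§102": (12.4, -27.6),
    "§112": (14.1, -25.9),
}
for statute, (rate, delta) in rates.items():
    baseline = round(rate - delta, 1)
    print(f"{statute}: implied TC average = {baseline}%")  # 40.0% in every case
```

That the four implied baselines agree suggests the deltas were all computed against a single Tech Center-wide allow-rate estimate rather than per-statute averages.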

Office Action

§103
DETAILED ACTION

This action is responsive to the application filed on 1/30/2024 and the preliminary amendment filed on the same day. Claims 1-5 are pending in this application. Claim 1 is an independent claim.

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 1/30/2024 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Claim Objections

Claims 1, 2, and 5 are objected to because of the following informalities:
- Claim 1: replace "… apparatus, comprising: …" with "… apparatus, the apparatus comprising: …"
- Claim 2: replace on the last line "… to different appearance …" with "… to a different appearance …"
- Claim 5: replace on the last line "… tactile feedback tactile feedback …" with "… tactile feedback …"

Appropriate correction is required.

Claim Interpretation

The claims in this application are given their broadest reasonable interpretation using the plain meaning of the claim language in light of the specification as it would be understood by one of ordinary skill in the art. The broadest reasonable interpretation of a claim element (also commonly referred to as a claim limitation) is limited by the description in the specification when 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is invoked.

As explained in MPEP § 2181, subsection I, claim limitations that meet the following three-prong test will be interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph:
(A) the claim limitation uses the term "means" or "step" or a term used as a substitute for "means" that is a generic placeholder (also called a nonce term or a non-structural term having no specific structural meaning) for performing the claimed function;
(B) the term "means" or "step" or the generic placeholder is modified by functional language, typically, but not always, linked by the transition word "for" (e.g., "means for") or another linking word or phrase, such as "configured to" or "so that"; and
(C) the term "means" or "step" or the generic placeholder is not modified by sufficient structure, material, or acts for performing the claimed function.

Use of the word "means" (or "step") in a claim with functional language creates a rebuttable presumption that the claim limitation is to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites sufficient structure, material, or acts to entirely perform the recited function.

Absence of the word "means" (or "step") in a claim creates a rebuttable presumption that the claim limitation is not to be treated in accordance with 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph. The presumption that the claim limitation is not interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, is rebutted when the claim limitation recites function without reciting sufficient structure, material, or acts to entirely perform the recited function.

Claim limitations in this application that use the word "means" (or "step") are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action. Conversely, claim limitations in this application that do not use the word "means" (or "step") are not being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, except as otherwise indicated in an Office action.

This application includes one or more claim limitations that do not use the word "means," but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholder is not preceded by a structural modifier. Such claim limitation(s) is/are:
- "a display unit that displays …", "a storage unit that stores …", "a vibration actuator that provides a user with a tactile sensation …", "a waveform editing unit that edits the waveform information", and "the waveform editing unit updates the waveform information …" in claim 1;
- "a display control unit that changes an appearance …" in claim 2;
- "the storage unit stores accumulated information" in claim 3;
- "the display unit displays an editing icon" and "the waveform editing unit edits …" in claim 4; and
- "a comment processing unit for inputting and storing a user's comment on the provided tactile sensation" in claim 5.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. According to Applicant's disclosure:
- The display unit is a display device such as a head-mounted display, AR glasses, a non-head-worn display device such as the display of a tablet terminal or a mobile terminal or an installation-type display, or a projector that forms an image on a space or an object [see e.g. [0023] of the specifications].
- The storage unit is e.g. a nonvolatile memory on a local device, server, or cloud storage [see e.g. [0028] as well as [0073] of the specifications].
- The vibration actuator covers sheet-shaped dielectric elastomer actuators (DEAs) or equivalents [see e.g. [0016]-[0021] as well as [0070] of the specifications].
- The waveform editing unit is part of a control unit which can be (1) one or more processors that operate according to a computer program (software); (2) one or more dedicated hardware circuits (application-specific integrated circuits: ASICs) that execute at least part of various processes; or (3) a combination thereof. The processors include, for example, CPUs [see e.g. [0034] as well as [0038] of the specifications].
- The comment processing unit is a part of the display unit [see e.g. [0011] as well as [0068]-[0069] of the specifications; see also the portions of the specifications listed for the display unit].

If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Examiner Comments

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.
Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1 and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Yokoyama, US Patent No. 11,755,117 B2 (hereinafter Yokoyama), in view of Khare et al., US Patent No. 10,955,922 B2 (hereinafter Khare).

Regarding independent claim 1, Yokoyama teaches a tactile feedback apparatus [see figs. 3-5, all representing a device providing tactile feedback], comprising: a display unit that displays a visual object [note the touch sensor display 22 shown in figs. 3 and 4; note the object displayed in fig. 4; see also fig. 11, S3]; a storage unit that stores waveform information expressing a tactile sensation of the visual object [note the storage unit 43 shown in fig. 5, and especially note S6 of fig. 11 indicating the acquiring of a haptic control signal from the storage unit corresponding to certain data detected by touching a displayed image, as per S3-S6 of fig. 11]; and a vibration actuator that provides a user with a tactile sensation based on vibration [see the piezoelectric actuator 23 shown in fig. 3 and note S8 performing haptic presentation based on a haptic control signal; see also col. 4, lines 39-42]; wherein the vibration actuator vibrates based on the waveform information in a case in which a targeted area of the user's body for the tactile feedback by the vibration actuator comes into contact with the visual object [again, see the piezoelectric actuator 23 shown in fig. 3 and note S8 performing haptic presentation based on a haptic control signal; note from the earlier steps of fig. 11 that the haptic control signal corresponds to certain data detected by touching a displayed image, as per S5 and as described in col. 11, lines 46-50].

Yokoyama does not explicitly teach a waveform editing unit that edits the waveform information. Neither does it explicitly teach that the waveform editing unit updates the waveform information stored in the storage unit based on an operation associated with completion of the editing.

Khare teaches a waveform editing unit that edits the waveform information, wherein the waveform editing unit updates the waveform information stored in a storage unit based on an operation associated with completion of the editing [note col. 10, lines 7-16, indicating editing a haptic signal as shown in fig. 4A and a saving operation that is triggered by a user clicking an interactive icon 416, as shown in fig. 4B; note the refining of a haptic signal in col. 1, lines 47-52; see also col. 4, lines 26-41 and col. 8, lines 18-33, describing the composer program 110 for editing the haptic signal and its operation, respectively].

It would have been obvious to one of ordinary skill in the art having the teachings of Yokoyama and Khare, before the effective filing date of the claimed invention, to modify the apparatus taught by Yokoyama to explicitly specify a waveform editing unit that edits the waveform information and that the waveform editing unit updates the waveform information stored in the storage unit based on an operation associated with completion of the editing, as per the teachings of Khare. The motivation for this obvious combination would be to allow for haptic signal refinements based on user selections and preferences, which would enable the generation of unique haptic signals as desired, as taught by Khare [see e.g. col. 8, lines 18-19; see also col. 2, lines 37-79].

Regarding claim 5, the rejection of independent claim 1 is fully incorporated. Khare further teaches a comment processing unit for inputting and storing a user's comment on the provided tactile sensation [see col. 10, lines 21-27, indicating an interactive text field for inputting and storing a user's chosen text that corresponds to the haptic signal created]. It would have been obvious to one of ordinary skill in the art having the teachings of Yokoyama and Khare, before the effective filing date of the claimed invention, to further modify the apparatus taught by Yokoyama and modified by Khare to explicitly specify a comment processing unit for inputting and storing a user's comment on the provided tactile sensation, as per the teachings of Khare. The motivation for this obvious combination would be to allow a user to attach a name or other notes to a haptic signal, as taught by Khare [see e.g. col. 10, lines 21-27], which would enable referring back to the created signal and reusing or refining it more easily.

Claim 2 is rejected under 35 U.S.C. 103 as being unpatentable over Yokoyama in view of Khare, as applied to claim 1 above, and further in view of Bochereau et al., US Patent No. 10,852,827 B1 (hereinafter Bochereau).

Regarding claim 2, the rejection of independent claim 1 is fully incorporated. Yokoyama further teaches a display control unit that changes an appearance of the visual object [see the image display unit 45 in fig. 5; note the changing image in col. 9, line 54 and in fig. 9]. Although Yokoyama further teaches a change in the waveform as an image is changed [see e.g. fig. 9 (especially compare the waveform with ID=1 in fig. 9B to that with ID=1 in fig. 9C); see also col. 9, lines 51-56], the previously combined art does not explicitly teach that the waveform information is one of multiple sets of waveform information stored in the storage unit, each set of the waveform information independently corresponding to a different appearance of the visual object.

Bochereau teaches waveform information that is one of multiple sets of waveform information, each set of the waveform information independently corresponding to a different appearance of a visual object [see in col. 3, lines 30-41, the different waveform parameters differing for a visual object based on the different appearances of relative rigidity of the virtual object]. It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Bochereau, before the effective filing date of the claimed invention, to further modify the apparatus taught by Yokoyama and modified by Khare to explicitly specify that each set of the waveform information independently corresponds to a different appearance of the visual object, as per the teachings of Bochereau. The motivation for this obvious combination would be to enable greater immersion or realism for the user, as taught by Bochereau [see e.g. col. 2, lines 11-13].

Claim 3 is rejected under 35 U.S.C. 103 as being unpatentable over Yokoyama in view of Khare, as applied to claim 1 above, and further in view of Nakagawa et al., US PGPUB 2024/0168560 A1 (hereinafter Nakagawa).

Regarding claim 3, the rejection of independent claim 1 is fully incorporated. The previously combined art does not explicitly teach that the storage unit stores accumulated information including differential information related to differences between the updated waveform information and preset waveform information, or accumulated information of the updated waveform information.
Nakagawa teaches storing accumulated information including differential information related to differences between updated waveform information and preset waveform information, or accumulated information of updated waveform information [see [0169] and [0174], indicating saving errors to the database; see also in [0127]-[0128] the differences/changes saved to the waveform values]. It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Nakagawa, before the effective filing date of the claimed invention, to further modify the storage unit taught by Yokoyama and modified by Khare to explicitly specify storing accumulated information including differential information related to differences between updated waveform information and preset waveform information, or accumulated information of updated waveform information, as per the teachings of Nakagawa. The motivation for this obvious combination would be to enable recreating a personalized tactile sensation by generating the corresponding vibration waveforms based on the saved error information, as taught by Nakagawa [see e.g. [0175]-[0176]], which would ultimately save storage space and yield the same final updated waveforms.

Claim 4 is rejected under 35 U.S.C. 103 as being unpatentable over Yokoyama in view of Khare, as applied to claim 1 above, and further in view of Weddle et al., US Patent No. 9,063,570 B2 (hereinafter Weddle).

Regarding claim 4, the rejection of independent claim 1 is fully incorporated. Yokoyama further teaches displaying the visual object and a user input interacting with the visual object initiating the tactile feedback [note the object displayed in fig. 4; see also fig. 11, steps S3-S8 and related text]. Khare further teaches that the display unit displays an editing UI element for editing the waveform information on a screen, and the waveform editing unit edits the waveform information based on an operation of the editing UI element [note in col. 9, lines 44-53, the editing pane 406 including different UI elements for editing the waveform information as shown in the signal preview 408 displayed on the same interface 400A in fig. 4A].

The previously combined art does not explicitly teach an editing icon for editing the waveform information that is displayed on a screen on which the visual object is displayed. Weddle teaches editing icons as an example of UI haptic control elements for editing waveform information [note col. 6, lines 36-40, and note haptic control 320 including icons]. Weddle further teaches displaying the haptic control interface on the screen responsive to user input and based on current context [see fig. 7, 710-730 and related text].

It would have been obvious to one of ordinary skill in the art having the teachings of the previously combined art and Weddle, before the effective filing date of the claimed invention, to further modify the apparatus taught by Yokoyama and modified by Khare by applying Weddle's teaching of displaying the haptic control interface, including an editing icon, responsive to user input and based on current context to the context of the haptic-feedback-triggering user input on the displayed virtual object taught by Yokoyama and modified by Khare to include an editing UI element for editing the haptic waveform information, such that the editing icon for editing the waveform information is displayed on the screen on which the visual object is displayed.
The motivation for this obvious combination would be to enable providing a more immersive experience to the user, in which the user is also able to adjust the feedback based on preferences related to a certain interaction context with the display, as taught by Weddle [see e.g. col. 1, lines 46-62 and col. 2, lines 33-38].

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure. Examiner notes from the cited art: US 2021/0165491 A1, Sun et al., which teaches generating different tactile sensations based on information of several attributes of a visual object, such as texture, contour, and roughness [see front figure and [0138]].

Any inquiry concerning this communication or earlier communications from the examiner should be directed to MARIA S AYAD, whose telephone number is (571) 272-2743. The examiner can normally be reached Monday-Friday, 7:30 am - 4:30 pm, alternate Fridays, EST.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Adam Queler, can be reached at (571) 272-4140. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/MARIA S AYAD/
Primary Examiner, Art Unit 2172

Prosecution Timeline

Jan 30, 2024
Application Filed
Oct 17, 2025
Non-Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12554263: DRONE-ASSISTED VEHICLE EMERGENCY RESPONSE SYSTEM (granted Feb 17, 2026; 2y 5m to grant)
Patent 12549436: INTERNET OF THINGS CONFIGURATION USING EYE-BASED CONTROLS (granted Feb 10, 2026; 2y 5m to grant)
Patent 12474181: METHOD FOR GENERATING DIAGRAMMATIC REPRESENTATION OF AREA AND ELECTRONIC DEVICE THEREOF (granted Nov 18, 2025; 2y 5m to grant)
Patent 12443856: DECISION INTELLIGENCE SYSTEM AND METHOD (granted Oct 14, 2025; 2y 5m to grant)
Patent 12443272: Proactive Actions Based on Audio and Body Movement (granted Oct 14, 2025; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 33%
With Interview: 50% (+17.1%)
Median Time to Grant: 3y 10m
PTA Risk: Low
Based on 159 resolved cases by this examiner. Grant probability derived from career allow rate.
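The headline projections appear to be simple arithmetic on the examiner's career record. A minimal sketch of how the 33% and 50% figures can be reproduced, assuming the interview lift is applied additively in percentage points (the additive assumption is ours, not stated in the source):

```python
granted, resolved = 53, 159   # examiner's career record, from the stats above
interview_lift = 17.1         # interview lift, in percentage points

base_rate = granted / resolved * 100           # career allow rate (~33.3%)
with_interview = base_rate + interview_lift    # additive-lift assumption (~50.4%)

print(round(base_rate))        # 33
print(round(with_interview))   # 50
```

This matches the "Grant probability derived from career allow rate" note: the dashboard rounds 53/159 to 33% and adds the 17.1-point lift to get the 50% with-interview figure.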
