Prosecution Insights
Last updated: April 19, 2026
Application No. 18/584,088

SYSTEM AND METHOD FOR PROVIDING HAPTIC FEEDBACK

Non-Final OA (§102)

Filed: Feb 22, 2024
Examiner: TILLERY, RASHAWN N
Art Unit: 2174
Tech Center: 2100 — Computer Architecture & Software
Assignee: Sony Interactive Entertainment Inc.
OA Round: 1 (Non-Final)

Grant Probability: 64% (Moderate)
Expected OA Rounds: 1-2
Time to Grant: 3y 10m
With Interview: 76%

Examiner Intelligence

Career Allow Rate: 64% (grants 394 of 611 resolved cases; +9.5% vs TC avg)
Interview Lift: +11.6% (moderate, roughly +12%, across resolved cases with interview)
Typical Timeline: 3y 10m average prosecution; 32 applications currently pending
Career History: 643 total applications across all art units
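The headline figures above fit together arithmetically. A minimal sketch of the assumed relationships, reconstructed from the displayed numbers rather than the vendor's published methodology:

```python
# Assumed formulas backing the dashboard's headline figures.
granted, resolved = 394, 611          # examiner's career totals
allow_rate = granted / resolved       # 0.6448 -> displayed as 64%

interview_lift = 0.116                # +11.6 percentage points
with_interview = allow_rate + interview_lift  # 0.7608 -> displayed as 76%

print(f"Career allow rate: {allow_rate:.0%}")   # prints "Career allow rate: 64%"
print(f"With interview:    {with_interview:.0%}")  # prints "With interview:    76%"
```

Note the interview lift is treated here as an additive percentage-point adjustment; that is consistent with the 64% → 76% jump shown, but the actual model behind the estimate is not disclosed on this page.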

Statute-Specific Performance

§101: 5.1% (-34.9% vs TC avg)
§102: 22.8% (-17.2% vs TC avg)
§103: 61.3% (+21.3% vs TC avg)
§112: 5.4% (-34.6% vs TC avg)

Deltas are relative to the Tech Center average estimate • Based on career data from 611 resolved cases
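The per-statute deltas appear to be measured against a single Tech Center average line (the chart's reference line). Assuming the relationship delta = rate − TC average, the reference value can be backed out from the displayed figures:

```python
# Recover the implied Tech Center average from each statute's rate and delta.
# The delta = rate - TC_average relationship is an assumption inferred from
# the displayed numbers, not documented by the dashboard.
rates  = {"101": 5.1, "103": 61.3, "102": 22.8, "112": 5.4}
deltas = {"101": -34.9, "103": 21.3, "102": -17.2, "112": -34.6}

tc_avg = {s: round(rates[s] - deltas[s], 1) for s in rates}
print(tc_avg)  # every statute backs out the same 40.0 reference value
```

All four statutes recover the same 40.0% figure, which supports reading the deltas as offsets from one shared Tech Center baseline rather than per-statute baselines.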

Office Action (§102)
Notice of Pre-AIA or AIA Status: The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

1. This communication is responsive to the application filed 2/22/2024.

2. Claims 11 and 16-34 are pending in this application. Claims 11, 25, and 34 are independent claims. This action is made Non-Final.

Claim Rejections - 35 USC § 102

3. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless –
(a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

4. Claims 11 and 16-34 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Shi et al. ("Shi," US 2021/0373670).
Regarding claim 11, Shi discloses a computer-implemented method comprising: obtaining audio components of media content (see fig. 9, S302; e.g., obtain audio file containing multimedia file); selecting, by a classifier and from among the audio components of the media content (see paragraphs [0035], [0041], [0051] and [0077]-[0078]; e.g., neural network selects abrupt change in audio based on pitch), a most prominent audio component that is representative of a scene in the media content (see paragraphs [0045], [0060] and [0073]; e.g., abrupt change in scene audio based on pitch change); and generating an output haptics signal based at least on the most prominent audio component that was selected by the classifier from among the audio components of the media content (see paragraphs [0077]-[0078]; e.g., "obtaining a feature parameter of each of the audio segments; inputting the feature parameter of each of the audio segments to a trained deep neural network model; and determining the target audio segment from the multiple audio segments according to an output result of the deep neural network model…detecting an audio power of each of the audio segments; and controlling the vibration element in the computer device to perform a second type of vibration operation according to the audio power of each of the audio segments.").

Regarding claim 16, Shi discloses performing sound source separation on the audio components to generate sub-components for each audio component, wherein the classifier selects the most prominent audio component based at least on the audio components and the generated sub-components (see fig. 9, S302, S304 and S900; also see paragraph [0041]; e.g., divide target audio into multiple audio segments and detect highest pitch).
Regarding claim 17, Shi discloses assigning each of the one or more audio components a respective prominence value, wherein the output haptics signal is generated based at least on a prominence value of the selected most prominent audio component (see paragraphs [0042]-[0046]; e.g., pitch change range).

Regarding claim 18, Shi discloses transmitting the output haptics signal for output by one or more haptics actuators concurrent to the media content (see claim 1 above; e.g., "vibration operation according to the audio power of each of the audio segments").

Regarding claim 19, Shi discloses wherein the audio components are contained in one or more separate audio channels (see paragraphs [0036] and [0048]; e.g., "the content of the multimedia file includes but is not limited to a music type, a commentary type, and so on.").

Regarding claim 20, Shi discloses wherein the classifier selects the most prominent audio component based at least on audio channel information indicating a category of sound in the respective audio channel (see paragraph [0036]; e.g., "the content of the multimedia file includes but is not limited to a music type, a commentary type, and so on.").

Regarding claim 21, Shi discloses wherein the classifier selects the most prominent audio component based at least on input media data representative of the media content (see paragraph [0036]; e.g., "the content of the multimedia file includes but is not limited to a music type, a commentary type, and so on.").

Regarding claim 22, Shi discloses wherein the media content is video game content, and the input media data comprises data relating to the state of the video game content (see paragraph [0048]; e.g., game content).

Regarding claim 23, Shi discloses wherein the output haptics signal is a multichannel haptics signal (see paragraphs [0042]-[0046]; e.g., vibration associated with music, video or game).
Regarding claim 24, Shi discloses selecting, by the classifier and from among the audio components of the media content, a next-most prominent audio component that is representative of the scene in the media content, wherein the output haptics signal is generated based at least on the most prominent audio component and the next-most prominent audio component that were selected by the classifier from among the audio components of the media content (see paragraphs [0042]-[0046]; e.g., pitch change range).

Claims 25-33 are similar in scope to claims 11 and 16-24, respectively, and are therefore rejected under similar rationale. Claim 34 is similar in scope to claim 11 and is therefore rejected under similar rationale.

Conclusion

5. The prior art made of record and not relied upon is considered pertinent to applicant's disclosure: Yoo (US 2016/0175718).

6. Any inquiry concerning this communication or earlier communications from the examiner should be directed to RASHAWN N TILLERY, whose telephone number is (571) 272-6480. The examiner can normally be reached M-F 9:00a - 5:30p.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, William L Bashore, can be reached at (571) 272-4088. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov.
Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (in USA or Canada) or 571-272-1000.

/RASHAWN N TILLERY/
Primary Examiner, Art Unit 2174
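For orientation, the method recited in claim 11 (obtain audio components, have a classifier pick the most prominent one, derive a haptics signal from it) can be sketched in a few lines. Everything below is hypothetical illustration: the function names, the RMS-energy "prominence" heuristic, and the peak-normalized mapping are invented for this sketch and appear in neither the application nor Shi.

```python
# Illustrative sketch of the claim 11 pipeline. The prominence heuristic
# (RMS energy) and all names here are hypothetical, not from the record.

def rms(component):
    """Root-mean-square energy, used as a stand-in prominence score."""
    return (sum(s * s for s in component) / len(component)) ** 0.5

def classify_most_prominent(components):
    """Toy 'classifier': select the component with the highest energy."""
    return max(components, key=rms)

def haptics_signal(component, gain=0.8):
    """Map the selected audio component to normalized actuator drive values."""
    peak = max(abs(s) for s in component) or 1.0  # avoid divide-by-zero on silence
    return [gain * s / peak for s in component]

# Usage: three mock audio components of one scene; dialogue is loudest.
scene = {
    "ambience": [0.05, -0.04, 0.06, -0.05],
    "dialogue": [0.90, -0.80, 0.85, -0.90],
    "music":    [0.30, -0.20, 0.25, -0.30],
}
selected = classify_most_prominent(list(scene.values()))
signal = haptics_signal(selected)
```

A real classifier in this space would be a trained model (Shi's mapped disclosure uses a deep neural network over per-segment feature parameters); the energy heuristic above only stands in for that selection step.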

Prosecution Timeline

Feb 22, 2024: Application Filed
Feb 27, 2026: Response after Non-Final Action
Mar 23, 2026: Non-Final Rejection — §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12602701: INTERACTIVE MAP INTERFACE INCORPORATING CUSTOMIZABLE GEOSPATIAL DATA (2y 5m to grant; granted Apr 14, 2026)
Patent 12547302: PAGE PRESENTATION METHOD, DISPLAY SYSTEM AND STORAGE MEDIUM (2y 5m to grant; granted Feb 10, 2026)
Patent 12542871: DATA PROCESSING METHOD AND APPARATUS, DEVICE, AND READABLE STORAGE MEDIUM (2y 5m to grant; granted Feb 03, 2026)
Patent 12536219: DIGITAL CONTAINER FILE FOR MULTIMEDIA PRESENTATION (2y 5m to grant; granted Jan 27, 2026)
Patent 12524138: METHOD AND APPARATUS FOR ADJUSTING POSITION OF VIRTUAL BUTTON, DEVICE, STORAGE MEDIUM, AND PROGRAM PRODUCT (2y 5m to grant; granted Jan 13, 2026)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 1-2
Grant Probability: 64% (76% with interview, +11.6%)
Median Time to Grant: 3y 10m
PTA Risk: Low

Based on 611 resolved cases by this examiner. Grant probability derived from career allow rate.
