Prosecution Insights
Last updated: April 19, 2026
Application No. 18/023,796

INFORMATION PROCESSING DEVICE, USER TERMINAL, CONTROL METHOD, NON-TRANSITORY COMPUTER-READABLE MEDIUM, AND INFORMATION PROCESSING SYSTEM

Final Rejection — §101, §102

Filed: Feb 28, 2023
Examiner: ARMSTRONG, JONATHAN D
Art Unit: 3645
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: NEC Corporation
OA Round: 3 (Final)

Grant Probability: 52% (Moderate)
Expected OA Rounds: 4-5
Time to Grant: 3y 9m
With Interview: 54%

Examiner Intelligence

Grants 52% of resolved cases.

Career Allow Rate: 52% (218 granted / 415 resolved; +0.5% vs TC avg)
Interview Lift: +1.5% (minimal; based on resolved cases with interview)
Avg Prosecution: 3y 9m (typical timeline)
Total Applications: 478 across all art units (63 currently pending)

Statute-Specific Performance

§101: 3.5% (-36.5% vs TC avg)
§103: 55.6% (+15.6% vs TC avg)
§102: 20.5% (-19.5% vs TC avg)
§112: 18.4% (-21.6% vs TC avg)

Tech Center averages are estimates. Based on career data from 415 resolved cases.

Office Action

§101 §102
Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action: A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.

Claims 1-3, 7-9, and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Stanek (US 2020/0137509 A1).

Regarding claim 1, Stanek discloses information processing device comprising: at least one memory storing instructions [[0033] functional blocks (e.g., processors or memories) may be implemented in a single piece of hardware]; and at least one processor configured to execute the instructions to [[0033] (e.g., a general purpose signal processor or a block of random access memory, hard disk, or the like) or multiple pieces of hardware; [0125] set of instructions intended to cause a system having an information processing capability to perform a particular function]: receive audio information using a microphone [[0104] remote speaker microphone (RSM) device; [0114] communications device transmitter 100, 100 a, 300 may comprise at least one microphone 102, 310], first position information of a first user terminal, and first direction information of the first user terminal from the first user terminal, and receive second position information of a second user terminal and second direction information of the second user terminal from the second user terminal [[abstract] system may include remote speaker/transmitting device(s) and listener/receiving device(s).
The speaker/transmitting device(s) may send real-time location information with audio or separate from the audio to the listener/receiving device(s). The listener/receiving device(s) use the speaker/transmitting device(s) location information relative to the listener/receiving device location and orientation to perform audio processing on the transmitted audio signal; [0036] each “location awareness” stereo headset is both a transmit and receive device or system, however, the system can include devices that can be transmitter-only or receive-only. Transmitter-only devices may be configured to transmit geospatial data in addition to audio voice communications.]; and output the audio information to the second user terminal, if a first position indicated by the first position information and a second position indicated by the second position information are within a predetermined distance [[0041] distance cues may include the loss of amplitude, the loss of high frequencies, and the ratio of the direct signal to the reverberated signal. Depending on where the source is located, the head acts as a barrier to change the timbre, intensity, and spectral qualities of the sound, helping the brain orient where the sound emanated from. These minute differences between the two ears are known as interaural cues.; [0042] with the 3-axis digital compass in the location awareness headset or device, a user may turn their head left or right or look up or down and the perceived audio direction automatically compensates for the rotation; [0043] direction of the perceived sound audibly informs User A of the relative orientation/direction where a remote talker/device (User B) is located. The relative volume of the voice or sound can also be adjusted to inform users of distance as well as direction (e.g., louder=near, softer=far). This directional information is particularly beneficial in situations where users are separated by distance or without the benefit of visual contact.]. 
wherein the at least one processor is further configured to execute the instructions to: generate region information that specifies a region for which the first position serves as a reference [[0002] directional awareness audio communications system configured to extract and utilize speaker location data to process incoming speaker audio to spatially position the audio in 3D space in a manner that provides the listener(s) with the perception that the audio is coming from a relative “geographical” direction of the remote speaker], and output the audio information, if the region information encompasses the second position information [[0052] audio signal from various angles may be compared to triangulate the position of the transmitter; [0055] mono or stereo signal; [0057] audio processing takes incoming mono or stereo audio and converts it into 3D spatial binaural (stereo) audio to produce the desired spatial effect], and if an angle formed by a direction indicated by the first direction information and a direction indicated by the second direction information is no greater than an angular threshold [[0042] a user may turn their head left or right or look up or down and the perceived audio direction automatically compensates for the rotation. The perceived audio still emanates from a fixed direction in space corresponding to the location coordinates of the person speaking. The spatial effect is similar to turning your head while listening to a person standing in a fixed location in the same room or nearby. 
As an example, if the voice from the remote user is perceived to be directly in front of a user's face, if the user rotates their head to the left by 90 degrees, the voice from the same remote user (provided the user does not move) is then perceived to be coming from the right; [0110] certain embodiments provided altered audio pitch, altered audio amplitude and/or introduce synthesized voice, tone, and/or haptic sensation (vibration) to communicate distance in addition to direction].

Regarding claim 3, Stanek discloses the information processing device according to claim 1, wherein the at least one processor is further configured to execute the instructions to generate sound localization information for which the first position serves as a sound localization position, based on the first position information, the second position information, and the second direction information, and further output the sound localization information [[0040] localization can be described in terms of three-dimensional position: the azimuth or horizontal angle, the elevation or vertical angle, and the distance (for static sounds) or velocity (for moving sounds).].

Regarding claim 7, Stanek discloses the information processing device according to claim 3 wherein the at least one processor is further configured to execute the instructions to transmit the audio information and the sound localization information to the second user terminal [[0036] each “location awareness” stereo headset is both a transmit and receive device or system, however, the system can include devices that can be transmitter-only or receive-only. Transmitter-only devices may be configured to transmit geospatial data in addition to audio voice communications.].
Regarding claim 8, Stanek discloses the information processing device according to claim 3 wherein the at least one processor is further configured to execute the instructions to subject the audio information to a sound localization process based on the audio information and the sound localization information, and transmit the audio information subjected to the sound localization process to the second user terminal [[0102] 3D audio processing techniques to provide a directional hearing experience by utilizing real-time or near real-time geospatial data to communicate directional information (from a current location toward a remote location) by processing audio to spatially position the audio communications to sound to the receiver as though it was originating from the direction of the remote transmitter's location. In an exemplary embodiment, 3D audio techniques enhance sound localization and allow the user to perceive the direction of the audio source by creating a position relationship between the source of the sound and ears of the use]. 
Regarding claim 9, Stanek discloses the information processing device according to claim 8, wherein the at least one processor is further configured to execute the instructions to: if the audio information is installed virtually in an object related to the audio information, receive object position information of the object from the first user terminal, and receive a browse request for installation position information as to where the audio information is virtually installed from the second user terminal [[0048] virtual reality headgear], register the object position information into storage means as the installation position information if the object position information has been received, or register the first position information into the storage means as the installation position information if the object position information has not been received [[0067] process is repeated for many places in the virtual environment to create an array of head-related transfer functions for each position to be recreated], and transmit installation position information registered in the storage means to the second user terminal [[0105] virtual reality headsets with attached or embedded audio components may comprise circuitry to identify its location coordinates or position using GPS, Global Navigation Satellite System (GNSS), BLE Beacon, WiFi Access Point, Altimeter, Inertial navigation system (INS), or any suitable location identification technology and the circuitry to include or encode the geospatial location coordinates onto or with the transmitted audio].

Regarding claim 20, Stanek discloses the information processing device according to claim 1 wherein the at least one processor is further configured to execute the instructions to determine whether to output the audio through the speaker [[abstract] system may include remote speaker/transmitting device(s) and listener/receiving device(s).
The speaker/transmitting device(s) may send real-time location information with audio or separate from the audio to the listener/receiving device(s)], based on first attribute information of a first user who uses the first user terminal [[0122] perform the function, regardless of whether performance of the function is disabled, or not enabled, by some user-configurable setting.].

Response to Arguments

Applicant’s arguments, see pgs. 6-7 bridging, filed 12/11/2025, with respect to claims 1-3, 7-9, and 20 have been fully considered and are persuasive. The rejection under 35 U.S.C. 101 of 9/12/2025 has been withdrawn.

Applicant's arguments filed 12/11/2025 have been fully considered but they are not persuasive. The remaining arguments pertaining to 35 U.S.C. 102 seem to amount to a general allegation that the claims define a patentable invention without specifically pointing out how the language of the claims patentably distinguishes them from the references. In summary, Stanek is disclosing an acoustic positioning system which is spatially aware in order to ultimately convey to the listener a 3D stereo audio sound which is perceived as having originated from a particular distance and direction.

Conclusion

THIS ACTION IS MADE FINAL. Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action.
In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to JONATHAN D ARMSTRONG whose telephone number is (571)270-7339. The examiner can normally be reached M - F 9am-5pm. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Isam Alsomiri can be reached on 571-272-6970. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/JONATHAN D ARMSTRONG/
Examiner, Art Unit 3645
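For readers mapping the §102 citations to the claim language, the conditional logic recited in claim 1 (output audio only when the two terminals are within a predetermined distance) and the angular-threshold limitation discussed alongside it can be sketched as follows. This is an illustrative reconstruction of the claimed conditions only, not code from the application or from Stanek; the threshold values and the haversine/heading helper functions are assumptions made for the sketch.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points (mean Earth radius)."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def angle_between_deg(h1, h2):
    """Smallest angle in degrees between two compass headings."""
    d = abs(h1 - h2) % 360.0
    return min(d, 360.0 - d)

def should_output_audio(pos1, dir1, pos2, dir2,
                        max_distance_m=50.0, angular_threshold_deg=90.0):
    """Claim-1-style gate: forward audio from the first terminal to the second only
    when the terminals are within a predetermined distance AND the angle formed by
    their facing directions is no greater than an angular threshold.
    (Both threshold values are illustrative assumptions, not from the claims.)"""
    close_enough = haversine_m(*pos1, *pos2) <= max_distance_m
    aligned = angle_between_deg(dir1, dir2) <= angular_threshold_deg
    return close_enough and aligned
```

A receiving terminal practicing the sound-localization limitations (claims 3 and 8) would additionally derive an azimuth from the bearing toward the first position relative to the listener's heading; that step is omitted here for brevity.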

Prosecution Timeline

Feb 28, 2023
Application Filed
Feb 22, 2025
Non-Final Rejection — §101, §102
May 23, 2025
Response Filed
Sep 09, 2025
Non-Final Rejection — §101, §102
Nov 12, 2025
Examiner Interview Summary
Nov 12, 2025
Applicant Interview (Telephonic)
Dec 11, 2025
Response Filed
Feb 04, 2026
Final Rejection — §101, §102 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12566264
ENHANCED RESOLUTION SPLIT APERTURE USING BEAM SEGMENTATION
2y 5m to grant • Granted Mar 03, 2026
Patent 12535001
DOWNHOLE ACOUSTIC SYSTEM FOR DETERMINING A RATE OF PENETRATION OF A DRILL STRING AND RELATED METHODS
2y 5m to grant • Granted Jan 27, 2026
Patent 12510644
Ultrasonic Microscope and Carrier for carrying an acoustic Pulse Transducer
2y 5m to grant • Granted Dec 30, 2025
Patent 12504525
OBJECT DETECTION DEVICE
2y 5m to grant • Granted Dec 23, 2025
Patent 12495789
ULTRASONIC GENERATOR AND METHOD FOR REPELLING MOSQUITO IN VEHICLE USING THE SAME
2y 5m to grant • Granted Dec 16, 2025
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 4-5
Grant Probability: 52% (54% with interview, +1.5%)
Median Time to Grant: 3y 9m
PTA Risk: High

Based on 415 resolved cases by this examiner. Grant probability derived from career allow rate.
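The projection figures are simple arithmetic on the career counts reported in the Examiner Intelligence panel above; a minimal sketch of how the headline numbers relate (the rounding to whole percentages in the display is an assumption about how the dashboard formats figures):

```python
# Career counts from the Examiner Intelligence panel
granted, resolved = 218, 415
interview_lift_pct = 1.5  # reported lift for cases resolved after an interview

allow_rate_pct = 100.0 * granted / resolved               # ~52.5%, displayed as 52%
with_interview_pct = allow_rate_pct + interview_lift_pct  # ~54.0%, displayed as 54%

print(f"{allow_rate_pct:.1f}% -> {with_interview_pct:.1f}% with interview")
# prints: 52.5% -> 54.0% with interview
```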
