Prosecution Insights
Last updated: April 19, 2026
Application No. 18/406,363

INFORMATION PROCESSING DEVICE

Final Rejection — §101, §103
Filed
Jan 08, 2024
Examiner
MORFORD, ALEXANDRA ROBYN
Art Unit
3658
Tech Center
3600 — Transportation & Electronic Commerce
Assignee
Toyota Jidosha Kabushiki Kaisha
OA Round
2 (Final)
57%
Grant Probability
Moderate
3-4
OA Rounds
2y 4m
To Grant
99%
With Interview

Examiner Intelligence

Grants 57% of resolved cases
57%
Career Allow Rate
4 granted / 7 resolved
+5.1% vs TC avg
Strong +60% interview lift
+60.0%
Interview Lift
resolved cases with interview
Typical timeline
2y 4m
Avg Prosecution
41 currently pending
Career history
48
Total Applications
across all art units

Statute-Specific Performance

§101
16.8%
-23.2% vs TC avg
§103
40.5%
+0.5% vs TC avg
§102
14.3%
-25.7% vs TC avg
§112
27.4%
-12.6% vs TC avg
Black line = Tech Center average estimate • Based on career data from 7 resolved cases

Office Action

§101 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Status of Claims

Claims 1-5 are currently pending and are examined herein.

Response to Amendments/Remarks

Any reference to the prior office action refers to the nonfinal rejection dated 10 July 2025. The rejection under 35 U.S.C. 112(b) from the prior office action was overcome by amendment.

Applicant's arguments with respect to the rejection under 35 U.S.C. 101 have been fully considered but are not persuasive. The claims, as amended, fail to amount to significantly more than the judicial exception because the additional elements are well-understood, routine, and conventional activity previously known to the industry, specified at a high level of generality (see MPEP 2106.05(d)), or else they are insignificant pre-solution activity in the form of mere data gathering or insignificant post-solution activity in the form of mere data outputting (see the numerous court decisions pertaining to observations, evaluations, judgments, and opinions, such as Electric Power Group, where collecting information, analyzing it, and outputting certain results of the collection and analysis was found not to be significantly more than the judicial exception; see MPEP 2106.05(d)(II)). See the full rejection analysis below.

Applicant’s arguments with respect to the rejections from the prior office action under 35 U.S.C. 102 (Claims 1-3 and 5) and 35 U.S.C.
103 (Claim 4) have been fully considered and are persuasive. Therefore, the rejections have been withdrawn. However, upon further consideration, new rejections under 35 U.S.C. 103 have been issued for Claims 1-5 (see below). The new rejections were necessitated by amendment.

Claim Objections

Claim 1 is objected to because of the following informalities:
“associates the content” should be “associates [[the]] content”.
“the latest version” should be “[[the]] a latest version”.
“the received update information” should be “the updated definition information”.
“the utterance by user” should be “the utterance by the user”.

Claim 2 is objected to because of the following informalities:
“the predetermined device is able” should be “the [[predetermined]] device is able”.
“the predetermined device is not able” should be “the [[predetermined]] device is not able”.

Claim 5 is objected to because of the following informality: “the processor is further configured to” should be deleted.

Appropriate corrections are required.

Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows: Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-5 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Step 1: Claims 1-5 are directed to a system. Therefore, Claims 1-5 meet the requirements to be considered a statutory category.

Step 2A Prong 1: The abstract idea (mental process) recited in Claims 1-5 is: “update the definition information based on the received update information” (Claim 1). This is an abstract idea because it could reasonably be performed in the human mind. For example, the human mind can update its assumptions based on new information.
The additional limitations in Claims 1-5 are as follows:
(a) an information processing device installed in a vehicle (Claim 1);
(b) a speakerphone including a microphone and a speaker (Claim 1);
(c) a processor (Claim 1);
(d) transmit confirmation information to a server by wireless communication, the confirmation information including version information of definition information stored at the vehicle, wherein the definition information is information that associates the content of an utterance by a user in the vehicle with a response to the utterance, and the utterance is an instruction to a device mounted in the vehicle to execute a predetermined function (Claim 1);
(e) receive updated definition information from the server to update the definition information when the confirmation information indicates that the definition information stored at the vehicle is not the latest version (Claim 1);
(f) the microphone of the speakerphone captures the utterance by the user (Claim 1);
(g) the speaker of the speakerphone outputs the response to the utterance based on the updated definition information (Claim 1);
(h) acquire a specific utterance by the user captured by the microphone, the specific utterance instructing a specific device in the vehicle to execute a specific function (Claim 3);
(i) output the response to the specific utterance via the speaker based on the updated definition information and whether the specific device is able to execute the specific function (Claim 3).

Step 2A Prong 2: Additional elements (a) and (b) fail to integrate the abstract idea into a practical application because they merely generally link the use of the judicial exception to a particular technological environment (see MPEP 2106.05(a)). Additional element (c) fails to integrate the abstract idea into a practical application because it merely applies the abstract idea to one or more generic computing components (see MPEP 2106.05(f)).
Additional elements (d), (e), (f), (g), (h), and (i) fail to integrate the abstract idea into a practical application because they are merely insignificant pre-solution and/or post-solution activity (see MPEP 2106.05(g)).

Step 2B: The additional elements, individually and in combination, fail to amount to significantly more than the judicial exception because the Office takes Official Notice that they are well-understood, routine, and conventional activity previously known to the industry, specified at a high level of generality (see MPEP 2106.05(d)), or else they are insignificant pre-solution activity in the form of mere data gathering (additional elements (d), (e), (f), and (h)) or insignificant post-solution activity in the form of mere data outputting (additional elements (g) and (i)) (see the numerous court decisions pertaining to observations, evaluations, judgments, and opinions, such as Electric Power Group, where collecting information, analyzing it, and outputting certain results of the collection and analysis was found not to be significantly more than the judicial exception; see MPEP 2106.05(d)(II)).

As one possible suggestion to overcome the rejection under 35 U.S.C. 101, Applicant could roll the limitations of Claim 3 into Claim 1 and add a practical application of providing the function requested by the user, which is supported by paragraph [0038] of the specification (“the specific function is executed by the specific device”).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claim 1 is rejected under 35 U.S.C. 103 as being unpatentable over U.S. Pub. No. 2023/0297362 (Ding et al., hereinafter Ding) in view of U.S. Pub. No. 2002/0032510 (Turnbull et al., hereinafter Turnbull).

Regarding Claim 1, Ding discloses An information processing device installed in a vehicle (see at least [0004], [0037]-[0039], [0067], [0295], and FIG. 24: “FIG. 24 is a structural block diagram of a terminal or server according to an embodiment of the present application”), the information processing device comprising: a processor (see at least [0295] and FIG.
24: processor 2420) configured to transmit confirmation information to a server by wireless communication, the confirmation information including version information of definition information stored at the vehicle (see at least [0068] and [0080]: “In S101, an update request is sent to a cloud, wherein the update request includes a vehicle identifier, and the update request is used for enabling the cloud to obtain version information of vehicle voice scene data and version information of at least one piece of cloud voice scene data in real time to determine whether updatable voice scene data exists or not and determine target voice scene data under the condition that the updatable voice scene data exists.”; “The update request in step S101 may be sent to the cloud when the vehicle is powered on. Specifically, every time when powered on, the vehicle may actively request the cloud to determine whether the updatable voice scene data exists and determine and transmit target voice scene data under the condition that the updatable voice scene data exists, to implement update of the vehicle voice scene data.”), wherein the definition information is information that associates the content of an utterance by a user in the vehicle with a response to the utterance (see at least [0069]-[0070]: “the voice scene data may include navigation scene data. 
A correspondence in the navigation scene data may be a correspondence between voice information of the user “I want to look at the stars” and a synthetic voice “Entering the star mode for you” and a correspondence between voice information of the user “It’s stuffy” and a synthetic voice “Opening the window to get some air for you”, or a correspondence between voice information of the user “I want to look at the stars” and opening the sunroof of the vehicle as well as a synthetic voice “Entering the star mode for you” and a correspondence between voice information of the user “It’s stuffy” and opening the side window of the vehicle as well as a synthetic voice “Opening the window to get some air for you”.”), and the utterance is an instruction to a device mounted in the vehicle to execute a predetermined function (see at least [0069]-[0070]: “A correspondence in the navigation scene data may be a correspondence between voice information of the user “I want to look at the stars” and a synthetic voice “Entering the star mode for you””); receive updated definition information from the server to update the definition information when the confirmation information indicates that the definition information stored at the vehicle is not the latest version (see at least [0084], [0104]-[0105], FIG. 1, and FIG. 3: “In S102, the target voice scene data returned by the cloud is received.”; “determine whether version information newer than the version information of the vehicle voice scene data exists”); and update the definition information based on the received update information (see at least [0085], FIG. 1, and FIG. 
2: “In S103, the vehicle voice scene data is updated into the target voice scene data.”), wherein …captures the utterance by the user (see at least [0070], [0076], and [0081]: “voice information of the user is received”), and … outputs the response to the utterance based on the updated definition information (see at least [0069]-[0070], [0075]-[0076], and [0081]-[0082]: “For example, when the vehicle receives voice information of the user “I want to look at the stars” and may not give any feedback, the vehicle may label the voice information as a no-feedback voice. When obtaining “I want to look at the stars”, the cloud may update original-version cloud voice scene data to obtain new-version cloud voice scene data, the new-version cloud voice scene data including a correspondence between the voice information of the user “I want to look at the stars” and an interaction task, so that the update speed of the cloud voice scene data may be improved. Then, the cloud may provide the new-version cloud voice scene data for the vehicle according to an update request, such that the vehicle may execute the corresponding interaction task when receiving the voice information “I want to look at the stars” next time or after a preset time threshold.”; “A correspondence in the navigation scene data may be a correspondence between voice information of the user “I want to look at the stars” and a synthetic voice “Entering the star mode for you””).

Ding does not explicitly disclose a speakerphone including a microphone and a speaker, the microphone of the speakerphone captures the utterance by the user, the speaker of the speakerphone outputs the response to the utterance.

Turnbull, in the same field of vehicle systems, and therefore analogous art, teaches a speakerphone including a microphone and a speaker (see at least FIG. 6 and FIG.
17: rearview mirror assembly 10 includes microphone assembly 140 and can include speaker 500), the microphone of the speakerphone captures the utterance by the user (see at least [0013], [0015]-[0016], [0124]-[0139], [0178], and [0203]: “a voice recognition circuit carried by the mirror mounting structure and coupled to a microphone for receiving voice signals from a vehicle occupant”; “the voice recognition circuit may be used to recognize any spoken command during such time that a call is not in progress”; “The microphone and voice recognition portions of the system may be utilized by the driver to input inquiries such as "identify closest gas station."”), the speaker of the speakerphone outputs the response to the utterance (see at least [0012]-[0014], [0203], and [0219]: “a speech synthesizer circuit carried by the mirror mounting structure for generating synthesized voice audio signals”; “The microphone and voice recognition portions of the system may be utilized by the driver to input inquiries such as "identify closest gas station." The system may then access the downloaded information and either display the location of the closest gas station on the map display and/or play back a synthesized audible message identifying the location of the gas station and giving directions”). 
It would have been obvious, before the effective filing date of the invention, with a reasonable expectation of success, to one having ordinary skill in the art, to combine the rearview mirror (i.e., speakerphone) of Turnbull with the information processing device of Ding because one of ordinary skill would understand there must be an apparatus for inputting sound (i.e., a microphone or equivalent) and an apparatus for outputting sound (i.e., a speaker or equivalent) in Ding, and Turnbull provides a specific solution with the advantage that “integration of the components of the invention into a single accessory such as a rearview mirror assembly makes the system much easier and less costly to install” (see at least Turnbull [0182]).

Claims 2, 3, and 5 are rejected under 35 U.S.C. 103 as being unpatentable over Ding in view of Turnbull in further view of U.S. Pub. No. 2019/0115014 (Hansen et al., hereinafter Hansen).

Regarding Claim 2, the Ding and Turnbull combination teaches the limitations of Claim 1.
Additionally, Ding further discloses wherein the definition information includes information associating the content of the utterance with a first response to the utterance when the predetermined device is able to execute the predetermined function (see at least [0069]-[0070]: “ A correspondence in the navigation scene data may be a correspondence between voice information of the user “I want to look at the stars” and a synthetic voice “Entering the star mode for you” and a correspondence between voice information of the user “It’s stuffy” and a synthetic voice “Opening the window to get some air for you”, or a correspondence between voice information of the user “I want to look at the stars” and opening the sunroof of the vehicle as well as a synthetic voice “Entering the star mode for you” and a correspondence between voice information of the user “It’s stuffy” and opening the side window of the vehicle as well as a synthetic voice “Opening the window to get some air for you”.”). The Ding and Turnbull combination does not explicitly teach wherein the definition information includes information …associating the content of the utterance with a second response to the utterance when the predetermined device is not able to execute the predetermined function. Hansen, in the same field of vehicle systems, and therefore analogous art, teaches wherein the definition information includes information associating the content of the utterance with a first response to the utterance when the predetermined device is able to execute the predetermined function (see at least [0014]: “the system may ask “I can activate the most popular setting for you now. Would you like that?” After being provided an answer in the affirmative, the system may then locate popular configuration information database and identify activation configuration information therefrom. 
Once identified, the system will then retrieve the popular configuration information and use it to activate the vehicle feature in a manner in conformity with the most popular settings of either the vehicle occupant themselves or for similar users of the vehicle feature.”), and associating the content of the utterance with a second response to the utterance when the predetermined device is not able to execute the predetermined function (see at least [0014] and [0062]: “In step 360, since server 52 has determined that a remote activation option is not available for the selected vehicle feature, sever 54 will provide audio explanation information to telematics unit 30, or in certain embodiments—directly to audio system 36. This audio explanation information is generally in regards to the reasons why the vehicle feature will not permit a remote activation and can be announced to the vehicle occupant through audio system 36.”). It would have been obvious, before the effective filing date of the invention, with a reasonable expectation of success, to one having ordinary skill in the art, to combine the Ding and Turnbull combination with the teachings of Hansen to either remotely activate a feature for an occupant or help provide them instructions so they can activate the feature (see at least Hansen [0001]). Regarding Claim 3, the Ding and Turnbull combination teaches the limitations of Claim 1. As mentioned in Claim 1, Turnbull teaches a specific utterance by the user captured by the microphone and output the response to the specific utterance via the speaker (see mapping in Claim 1 and motivation to combine in Claim 1). 
Furthermore, Ding further discloses wherein the processor is further configured to acquire a specific utterance by the user…, the specific utterance instructing a specific device in the vehicle to execute a specific function (see at least [0069]-[0070]: “A correspondence in the navigation scene data may be a correspondence between voice information of the user “I want to look at the stars” and a synthetic voice “Entering the star mode for you””), and output the response to the specific utterance…based on the updated definition information and … the specific device is able to execute the specific function (see at least [0069]-[0070], [0075]-[0076], and [0081]-[0082]: “For example, when the vehicle receives voice information of the user “I want to look at the stars” and may not give any feedback, the vehicle may label the voice information as a no-feedback voice. When obtaining “I want to look at the stars”, the cloud may update original-version cloud voice scene data to obtain new-version cloud voice scene data, the new-version cloud voice scene data including a correspondence between the voice information of the user “I want to look at the stars” and an interaction task, so that the update speed of the cloud voice scene data may be improved. Then, the cloud may provide the new-version cloud voice scene data for the vehicle according to an update request, such that the vehicle may execute the corresponding interaction task when receiving the voice information “I want to look at the stars” next time or after a preset time threshold.”; “A correspondence in the navigation scene data may be a correspondence between voice information of the user “I want to look at the stars” and a synthetic voice “Entering the star mode for you””). The Ding and Turnbull combination does not explicitly teach output the response to the specific utterance…based on…whether the specific device is able to execute the specific function. 
Hansen, in the same field of vehicle systems, and therefore analogous art, teaches output the response to the specific utterance…based on…whether the specific device is able to execute the specific function (see at least [0014] and FIG. 3: “The term “heated seats” can be identified as a vehicle feature by VRS. Subsequently, the system will locate a vehicle feature information database and identify description information corresponding to “heated seats.” Once identified, the system will then retrieve the description information and provide it back to the occupant in an audible form. For instance, the system may activate the vehicle's stereo system to explain “there's a switch on the side of your seat that, when activated, will enable a heating coil in your seat to warm to a preselected temperature.” In certain instances, the system may also provide the description in a visual form through a display (i.e., providing one or more pictures of the seat heating switch). The system will also determine whether the occupant would like the vehicle feature activated. As follows, the system may prompt the occupant make an activation decision using the stereo system. For instance, the system may ask “I can activate the most popular setting for you now. Would you like that?” After being provided an answer in the affirmative, the system may then locate popular configuration information database and identify activation configuration information therefrom. Once identified, the system will then retrieve the popular configuration information and use it to activate the vehicle feature in a manner in conformity with the most popular settings of either the vehicle occupant themselves or for similar users of the vehicle feature.”). 
It would have been obvious, before the effective filing date of the invention, with a reasonable expectation of success, to one having ordinary skill in the art, to combine the Ding and Turnbull combination with the teachings of Hansen to either remotely activate a feature for an occupant or help provide them instructions so they can activate the feature (see at least Hansen [0001]).

Regarding Claim 5, the Ding, Turnbull, and Hansen combination teaches the limitations of Claim 3. Additionally, Hansen further teaches wherein the response that is output indicates whether the specific device is able to execute the specific function based on whether the specific device is configured to execute the specific function (see at least [0014], [0059]-[0063], and FIG. 3: “In step 340, server 52 will determine if a feature activation request has been made.”; “In step 360, since server 52 has determined that a remote activation option is not available for the selected vehicle feature, sever 54 will provide audio explanation information to telematics unit 30, or in certain embodiments—directly to audio system 36.”; “In step 370, since server 52 has determined that remote activation is enabled for the selected vehicle feature, sever 54 will provide a remote activation command to telematics unit 30 (or in certain embodiments—directly to the vehicle feature)”; “The system will also determine whether the occupant would like the vehicle feature activated. As follows, the system may prompt the occupant make an activation decision using the stereo system. For instance, the system may ask “I can activate the most popular setting for you now. Would you like that?” After being provided an answer in the affirmative, the system may then locate popular configuration information database and identify activation configuration information therefrom.”). Note: the motivation to combine is the same as for Claim 2.

Claim 4 is rejected under 35 U.S.C.
103 as being unpatentable over Ding in view of Turnbull in view of Hansen in further view of U.S. Pub. No. 2021/0129780 (Mezaael et al., hereinafter Mezaael).

Regarding Claim 4, the Ding, Turnbull, and Hansen combination teaches the limitations of Claim 3. As previously discussed, Hansen teaches wherein the response that is output indicates whether the specific device is able to execute the specific function… (see at least [0014] and FIG. 3: “The term “heated seats” can be identified as a vehicle feature by VRS. Subsequently, the system will locate a vehicle feature information database and identify description information corresponding to “heated seats.” Once identified, the system will then retrieve the description information and provide it back to the occupant in an audible form. For instance, the system may activate the vehicle's stereo system to explain “there's a switch on the side of your seat that, when activated, will enable a heating coil in your seat to warm to a preselected temperature.” In certain instances, the system may also provide the description in a visual form through a display (i.e., providing one or more pictures of the seat heating switch). The system will also determine whether the occupant would like the vehicle feature activated. As follows, the system may prompt the occupant make an activation decision using the stereo system. For instance, the system may ask “I can activate the most popular setting for you now. Would you like that?” After being provided an answer in the affirmative, the system may then locate popular configuration information database and identify activation configuration information therefrom. Once identified, the system will then retrieve the popular configuration information and use it to activate the vehicle feature in a manner in conformity with the most popular settings of either the vehicle occupant themselves or for similar users of the vehicle feature.”).
The Ding, Turnbull, and Hansen combination does not explicitly teach …indicates whether the specific device is able to execute the specific function based on whether the specific device is installed in the vehicle. Mezaael, in the same field of vehicle systems, and therefore analogous art, teaches …indicates whether the specific device is able to execute the specific function based on whether the specific device is installed in the vehicle (see at least [0014], [0025]-[0026], and FIG. 2: “An application on the mobile device of the user may synchronize the user profile while the vehicle computer may govern and assess those rules and notify the fleet manager for what has been applied and what cannot be applied due to limitations or other reasons.”; “the user profile 188 may include information of a user identity, vehicles associated with the user (e.g. vehicle identification numbers (VINs)), and/or vehicle setting/preferences of the user such as seat position settings, radio presets, climate control settings, UBI configurations, drive mode settings, or the like. Responsive to receiving the user profile 188 from the mobile device 128, the computing platform 104 may process the user profile 188 to determine specific settings for each ECU 168.”; “the snapshot 190 may be stored in the storage 110 of the computing platform 104 and include matching information of the user profile 188 and available features of the vehicle 102.”; “At operation 206, the computing platform 104 queries configurations as requested to determine the availability of the configurations. For instance, the computing platform 104 may be configured to query an ECU 168 specified in a configuration request to determine if the requested configuration is available. Additionally or alternatively, the computing platform 104 may use the vehicle snapshot 190 to perform the handshake verification to determine the availability of the requested configurations or an alternative solution. 
If the computing platform 104 determines the requested configuration is available, the process proceeds from operation 208 to operation 210 and the requested configuration is applied via the computing platform 104 and/or one or more ECUs 168. Otherwise, if no readily available configuration is available, the process proceeds to operation 212 and the computing platform 104 verifies if an alternative solution is available using the vehicle snapshot 190 as discussed above. If no alternative solution for the requested configuration is available, the process proceeds to operation 214 and the computing platform 104 outputs a message via the HMI controls 112.”).

It would have been obvious, before the effective filing date of the invention, with a reasonable expectation of success, to one having ordinary skill in the art, to combine the teachings of Mezaael with the Ding, Turnbull, and Hansen combination because “different vehicles may vary in architecture, data, features, controllers and vehicle networks. A user profile compatible with one vehicle may not be compatible with others. This may potentially cause inconvenience to the user and/or a vehicle managing entity” (see at least Mezaael [0002]).

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a). A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action.
In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ALEXANDRA ROBYN MORFORD whose telephone number is (571) 272-6109. The examiner can normally be reached Monday - Friday 8:00 AM - 4:00 PM ET. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Thomas Worden, can be reached at (571) 272-4876. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free).
If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/A.R.M./
Examiner, Art Unit 3658

/JASON HOLLOWAY/
Primary Examiner, Art Unit 3658

Prosecution Timeline

Jan 08, 2024
Application Filed
Jul 07, 2025
Non-Final Rejection — §101, §103
Sep 19, 2025
Response Filed
Oct 17, 2025
Final Rejection — §101, §103 (current)

Precedent Cases

Applications granted by this examiner for similar technology

Patent 12594669
ROBOT CONTROL METHOD, ROBOT CONTROL SYSTEM, AND COMPUTER READABLE MEDIUM
2y 5m to grant Granted Apr 07, 2026
Patent 12576952
SENSOR CALIBRATION SYSTEM FOR WATERCRAFT AND WATERCRAFT
2y 5m to grant Granted Mar 17, 2026
Patent 12472632
OPERATION SYSTEM, OPERATION METHOD, AND STORAGE MEDIUM
2y 5m to grant Granted Nov 18, 2025
Patent 12358646
METHOD AND APPARATUS FOR CAPTURING NON-COOPERATIVE TARGET USING SPACE ROBOTIC ARM, AND NON-TRANSITORY STORAGE MEDIUM
2y 5m to grant Granted Jul 15, 2025
Study what changed to get past this examiner. Based on 4 most recent grants.


Prosecution Projections

3-4
Expected OA Rounds
57%
Grant Probability
99%
With Interview (+60.0%)
2y 4m
Median Time to Grant
Moderate
PTA Risk
Based on 7 resolved cases by this examiner. Grant probability derived from career allow rate.
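How these projections relate to the examiner data above can be sketched in a few lines. The snippet below is purely illustrative: the function names, the additive interview lift, and the 99% cap are assumptions made to reproduce the displayed numbers, not the tool's disclosed methodology.

```python
# Illustrative reconstruction of the dashboard's headline figures from the
# stated inputs (4 granted / 7 resolved, +60.0% interview lift). The
# additive lift and the 99% cap are assumptions, not the tool's model.

def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate: share of resolved cases that ended in a grant."""
    if resolved == 0:
        raise ValueError("no resolved cases")
    return granted / resolved

def with_interview(base: float, lift: float, cap: float = 0.99) -> float:
    """Grant probability with an additive interview lift, capped."""
    return min(base + lift, cap)

base = allow_rate(4, 7)               # 4 granted of 7 resolved
boosted = with_interview(base, 0.60)  # +60.0% interview lift

print(f"{base:.0%}")     # 57%
print(f"{boosted:.0%}")  # 99%
```

On the stated inputs this reproduces the displayed 57% baseline and 99% with-interview figures, consistent with the page simply adding the lift to the career allow rate and capping the result.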
