Prosecution Insights
Last updated: April 19, 2026
Application No. 17/768,852

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM

Non-Final OA (§101, §103)
Filed: Apr 14, 2022
Examiner: TANK, ANDREW L
Art Unit: 2141
Tech Center: 2100 (Computer Architecture & Software)
Assignee: Sony Group Corporation
OA Round: 3 (Non-Final)

Grant Probability: 68% (Favorable)
OA Rounds: 3-4
To Grant: 4y 0m
With Interview: 99%

Examiner Intelligence

Career Allow Rate: 68% (366 granted / 538 resolved; +13.0% vs TC avg; above average)
Interview Lift: +31.2% among resolved cases with interview (strong)
Typical Timeline: 4y 0m avg prosecution; 43 currently pending
Career History: 581 total applications across all art units

Statute-Specific Performance

§101: 12.0% (-28.0% vs TC avg)
§103: 37.5% (-2.5% vs TC avg)
§102: 28.6% (-11.4% vs TC avg)
§112: 13.5% (-26.5% vs TC avg)
Tech Center averages are estimates. Based on career data from 538 resolved cases.

Office Action

Rejections: §101, §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed 12/22/2025 in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 12/22/2025 has been entered.

The following action is in response to the amendment and remarks of 12/22/2025. By the amendment, claims 1, 4 and 18-20 have been amended. No claims have been canceled or newly added. Claims 1-20 are pending and have been considered below.

Response to Arguments/Amendment

Applicant argues (Remarks 12/22/2025, page 8) that the amendment to the claims overcomes the corresponding 35 USC 101 rejections (Final Rejection 10/22/2025, pages 4-19). Applicant presents no additional arguments regarding how the newly amended limitations overcome the 35 USC 101 rejections. The Examiner notes that the claims as amended remain rejected under 35 USC 101 for at least the reasons previously set forth (Final Rejection 10/22/2025, pages 3-19) and as presented in the updated rejections below.

Applicant argues (Remarks 12/22/2025, pages 8-9) that the amendment to the claims overcomes the corresponding 35 USC 103 rejections of claims 1-20 over MORI in view of MATHEWS (Final Rejection 10/22/2025, pages 20-28). The Examiner agrees, and the corresponding 35 USC 103 rejections over MORI in view of MATHEWS have been withdrawn. However, on further search and consideration, the prior art of SOLMAZ, US 2022/0161818 A1 (effective filing date 04/05/2019), was found to combine reasonably to meet the deficiency of MORI and MATHEWS, as presented in the new grounds of rejection below.
Claim Rejections - 35 USC § 101

35 U.S.C. 101 reads as follows:

Whoever invents or discovers any new and useful process, machine, manufacture, or composition of matter, or any new and useful improvement thereof, may obtain a patent therefor, subject to the conditions and requirements of this title.

Claims 1-20 are rejected under 35 U.S.C. 101 because the claimed invention is directed to an abstract idea without significantly more.

Regarding claim 1, claim 1 recites: “An information processing device comprising: processing circuitry configured to: acquire a model having a structure of a neural network and input information input to the model; select, during operation of a moving body, at least one basis generation algorithm from among a plurality of different basis generation algorithms that each includes a process or set of rules based on a processing situation or a characteristic; and generate, in real time, basis information indicating a basis for an output of the model after the input information is input to the model based on state information indicating a state of the model after the input of the input information to the model using the selected at least one basis generation algorithm; and control the moving body based on the basis information.”

Step 1, MPEP 2106.03: Claim 1 recites a device and is directed to a statutory category of invention.
Step 2A Prong One, MPEP 2106.04: Claim 1 recites at least acquiring a model and information input to the model; selecting, during operation of a moving body, at least one basis generation algorithm from a plurality of different basis generation algorithms, each including a process or set of rules based on a processing situation or a characteristic; generating, in real time, basis information indicating a basis for an output of the model after the input information is input to the model, based on state information indicating a state of the model after the input of the input information to the model, using the selected at least one basis generation algorithm; and controlling the moving body based on the basis information (e.g., a person operating a vehicle can model a process using information, choose an explanation for the processed information from among different explanations, use the chosen explanation to explain why a model output information based on its input information, and change operation of the vehicle based on the explanation). These limitations are directed to the abstract idea of a mental process that can be practically performed in the human mind, with or without the use of a physical aid such as pen and paper, including observations, evaluations, judgments and opinions. See MPEP 2106.04(a)(2)(III).

Step 2A Prong Two, MPEP 2106.04(d): Claim 1 further recites that the model has a structure of a neural network and that the steps are executed by processing circuitry of an information processing device. The additional elements of the model being a neural network and the device having processing circuitry do not, alone or in combination, integrate the judicial exception into a practical application, as they are presented as generic computer tools recited at a high level of generality. See MPEP 2106.04(d), 2106.05(f).
Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 1, including the use of generic computer tools, do not amount to significantly more than the judicial exception. The use of these limitations merely amounts to instructions to implement the abstract idea on a computer by using generic computer tools to perform the abstract idea. See MPEP 2106.05(f).

Regarding claim 2, claim 2 recites: “The information processing device according to claim 1, wherein the processing circuitry is configured to generate the basis information so as to indicate a basis of processing using an output of the model.”

Step 1, MPEP 2106.03: Claim 2 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 2 further recites that the basis information indicates a basis of processing using an output of the model. This additional element amounts to mere data gathering and output and does not, alone or in combination, integrate the judicial exception into a practical application. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 2, i.e., what the basis information indicates, does not amount to significantly more than the judicial exception. The use of this limitation amounts to mere data gathering. See MPEP 2106.05(g).

Regarding claim 3, claim 3 recites: “The information processing device according to claim 1, wherein the model is for control of a device that autonomously acts, and the processing circuitry is configured to generate the basis information so as to indicate a basis of the control of the device after the input information is input to the model.”

Step 1, MPEP 2106.03: Claim 3 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.
Step 2A Prong Two, MPEP 2106.04(d): Claim 3 further recites that the model is for control of a device that autonomously acts and that the basis information indicates a basis of the control of the device after the input information is input to the model. These additional elements do not, alone or in combination, integrate the judicial exception into a practical application, as they represent mere data gathering, an insignificant extra-solution activity, and general linking of the use of the judicial exception to a particular technological environment. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 3, including what the model is to be used for and what the basis information indicates, do not amount to significantly more than the judicial exception. Mere data gathering is an insignificant extra-solution activity generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(g).

Regarding claim 4, claim 4 recites: “The information processing device according to claim 1, wherein the moving body is autonomously movable, the model is for control of the moving body, and the processing circuitry is configured to generate the basis information so as to indicate a basis of the control of the moving body after the input information is input to the model.”

Step 1, MPEP 2106.03: Claim 4 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 4 further recites that the moving body is autonomously movable, that the model is for control of the moving body, and that the basis information indicates a basis of the control of the moving body after the input information is input to the model.
These additional elements do not, alone or in combination, integrate the judicial exception into a practical application, as they represent mere data gathering, an insignificant extra-solution activity, and general linking of the use of the judicial exception to a particular technological environment. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 4, including what the model is to be used for and what the basis information indicates, do not amount to significantly more than the judicial exception. Mere data gathering is an insignificant extra-solution activity generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(g).

Regarding claim 5, claim 5 recites: “The information processing device according to claim 4, wherein the moving body is a vehicle operating by automatic driving.”

Step 1, MPEP 2106.03: Claim 5 depends from the device of claim 4 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 5 further recites that the moving body is a vehicle operating by automatic driving. This additional element does not integrate the judicial exception into a practical application, as it represents mere data gathering, an insignificant extra-solution activity, and general linking of the use of the judicial exception to a particular technological environment. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 5, i.e., what the moving body is, does not amount to significantly more than the judicial exception. Mere data gathering is an insignificant extra-solution activity generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(g).
Regarding claim 6, claim 6 recites: “The information processing device according to claim 4, wherein the processing circuitry is configured to generate the basis information so as to indicate a basis of a movement direction of the moving body.”

Step 1, MPEP 2106.03: Claim 6 depends from the device of claim 4 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 6 further recites that the basis information indicates a basis of a movement direction of the moving body. This additional element does not integrate the judicial exception into a practical application, as it represents mere data gathering, an insignificant extra-solution activity, and general linking of the use of the judicial exception to a particular technological environment. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 6, i.e., what the basis information represents, does not amount to significantly more than the judicial exception. Mere data gathering is an insignificant extra-solution activity generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(g).

Regarding claim 7, claim 7 recites: “The information processing device according to claim 1, wherein the input information is sensor information detected by a sensor, and processing circuitry is configured to generate the basis information of the model to which the input information is input in response to the detection by the sensor.”

Step 1, MPEP 2106.03: Claim 7 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 7 further recites that the input information is sensor information detected by a sensor and that the basis information of the model is generated in response to the input detection by the sensor.
These additional elements do not, alone or in combination, integrate the judicial exception into a practical application, as they represent mere data gathering, an insignificant extra-solution activity, and general linking of the use of the judicial exception to a particular technological environment. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 7, including the sensor information and what the basis information indicates, do not amount to significantly more than the judicial exception. Mere data gathering is an insignificant extra-solution activity generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(g).

Regarding claim 8, claim 8 recites: “The information processing device according to claim 1, wherein the input information is image information; and the model outputs a recognition result of image information in response to an input of the image information.”

Step 1, MPEP 2106.03: Claim 8 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 8 further recites that the input information is image information and that the model outputs a recognition of image information in response to an input of the image information. These additional elements amount to mere data gathering and output and do not, alone or in combination, integrate the judicial exception into a practical application. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 8, i.e., the image recognition model inputs/outputs, do not amount to significantly more than the judicial exception, as they amount to mere data gathering. See MPEP 2106.05(g).
Regarding claim 9, claim 9 recites: “The information processing device according to claim 1, wherein the processing circuitry is configured to generate image information indicating a basis for an output of the model as the basis information.”

Step 1, MPEP 2106.03: Claim 9 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 9 further recites generating image information indicating the basis information. This additional element amounts to mere data gathering and output and does not, alone or in combination, integrate the judicial exception into a practical application. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 9, i.e., the generation of image information indicating the basis, does not amount to significantly more than the judicial exception. The use of this limitation amounts to mere data gathering. See MPEP 2106.05(g).

Regarding claim 10, claim 10 recites: “The information processing device according to claim 9, wherein the processing circuitry is configured to generate a heat map indicating a basis for an output of the model as the basis information.”

Step 1, MPEP 2106.03: Claim 10 depends from the device of claim 9 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 10 further recites generating a heat map indicating the basis information. This additional element amounts to mere data gathering and output and does not, alone or in combination, integrate the judicial exception into a practical application. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 10, i.e., the generating of a heat map, does not amount to significantly more than the judicial exception.
The use of this limitation amounts to mere data gathering and output. See MPEP 2106.05(g).

Regarding claim 11, claim 11 recites: “The information processing device according to claim 1, wherein the model includes a convolutional neural network (CNN).”

Step 1, MPEP 2106.03: Claim 11 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 11 further recites that the model includes a CNN. The additional element of the model being a CNN does not integrate the judicial exception into a practical application, as it is presented as a generic computer tool recited at a high level of generality. See MPEP 2106.04(d), 2106.05(f).

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 11, i.e., the CNN, does not amount to significantly more than the judicial exception. The use of this limitation merely amounts to instructions to implement the abstract idea on a computer by using generic computer tools to perform the abstract idea. See MPEP 2106.05(f).

Regarding claim 12, claim 12 recites: “The information processing device according to claim 11, wherein the state information includes a state of a convolution layer of the model.”

Step 1, MPEP 2106.03: Claim 12 depends from the device of claim 11 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 12 further recites that the state information includes a state of a convolution layer. This additional element does not, alone or in combination, integrate the judicial exception into a practical application, as it is directed to mere data gathering and output. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 12, i.e., what the state information is based on, does not amount to significantly more than the judicial exception.
The use of this limitation amounts to mere data gathering and output. See MPEP 2106.05(g).

Regarding claim 13, claim 13 recites: “The information processing device according to claim 12, wherein the processing circuitry is configured to generate the basis information by gradient-weighted class activation mapping (Grad-CAM).”

Step 1, MPEP 2106.03: Claim 13 depends from the device of claim 12 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: Claim 13 further recites generating basis information by gradient-weighted class activation mapping (Grad-CAM). Grad-CAM is directed to the abstract idea of mathematical concepts/equations. See MPEP 2106.04(a)(2)(I)(B).

Step 2A Prong Two, MPEP 2106.04(d): All elements are part of the abstract idea above.

Step 2B, MPEP 2106.05: All elements are part of the abstract idea above.

Regarding claim 14, claim 14 recites: “The information processing device according to claim 1, wherein the input information is output information from another model, the model performs an output in response to an input of the output information from the other model, and the processing circuitry is configured to generate the basis information of the model to which the input information is input in response to the output from the other model.”

Step 1, MPEP 2106.03: Claim 14 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 14 further recites that the model performs an output in response to input of output information from another model and that the basis information of the model is generated.
These additional elements do not, alone or in combination, integrate the judicial exception into a practical application, as they represent mere data gathering, an insignificant extra-solution activity, and general linking of the use of the judicial exception to a particular technological environment. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 14, including the input/output and generating, do not amount to significantly more than the judicial exception. Mere data gathering is an insignificant extra-solution activity generally linking the use of the judicial exception to a particular technological environment. See MPEP 2106.05(g).

Regarding claim 15, claim 15 recites: “The information processing device according to claim 1, wherein the state information includes an output result of the model after the input information is input to the model.”

Step 1, MPEP 2106.03: Claim 15 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 15 further recites that the state information includes an output result of the model after input information is input to the model. This additional element does not, alone or in combination, integrate the judicial exception into a practical application, as it is directed to mere data gathering and output. See MPEP 2106.05.

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 15, i.e., what the state information includes, does not amount to significantly more than the judicial exception. The use of this limitation amounts to mere data gathering and output. See MPEP 2106.05(g).
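For context on the Grad-CAM technique recited in claim 13 (characterized above as a mathematical concept), the published Grad-CAM computation reduces to a global-average-pooled gradient weighting of convolution-layer feature maps. The following is an illustrative sketch, not code from the application; the array shapes and normalization are assumptions.

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Gradient-weighted class activation mapping (illustrative sketch).

    activations: (K, H, W) feature maps from a convolution layer.
    gradients:   (K, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heat map scaled to [0, 1].
    """
    # Per-channel importance weight: global-average-pooled gradient.
    weights = gradients.mean(axis=(1, 2))                       # shape (K,)
    # Weighted combination of feature maps, then ReLU.
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam /= cam.max()                                        # normalize
    return cam
```

In practice the activations and gradients would come from a forward and backward pass through the model's final convolution layer, which maps naturally onto the "state of a convolution layer" recited in claim 12.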
Regarding claim 16, claim 16 recites: “The information processing device according to claim 15, wherein the processing circuitry is configured to generate the basis information by processing related to local interpretable model-agnostic explanations (LIME).”

Step 1, MPEP 2106.03: Claim 16 depends from the device of claim 15 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: Claim 16 further recites generating basis information by processing related to local interpretable model-agnostic explanations (LIME). LIME is directed to the abstract idea of mathematical concepts/equations. See MPEP 2106.04(a)(2)(I)(B).

Step 2A Prong Two, MPEP 2106.04(d): All elements are part of the abstract idea above.

Step 2B, MPEP 2106.05: All elements are part of the abstract idea above.

Regarding claim 17, claim 17 recites: “The information processing device according to claim 1, wherein the processing circuitry is configured to cause the basis information to be displayed.”

Step 1, MPEP 2106.03: Claim 17 depends from the device of claim 1 and is similarly drawn to a statutory category.

Step 2A Prong One, MPEP 2106.04: The analysis of the parent is incorporated.

Step 2A Prong Two, MPEP 2106.04(d): Claim 17 further recites the circuitry causing the basis information to be displayed. The additional element of circuitry causing display does not integrate the judicial exception into a practical application, as it is presented as a generic computer tool recited at a high level of generality. See MPEP 2106.04(d), 2106.05(f).

Step 2B, MPEP 2106.05: As discussed above, the additional element of Claim 17, i.e., the use of generic computer tools to display information, does not amount to significantly more than the judicial exception. The use of this limitation merely amounts to instructions to implement the abstract idea on a computer by using generic computer tools to perform the abstract idea. See MPEP 2106.05(f).
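The LIME processing recited in claim 16 (also characterized above as a mathematical concept) amounts to perturbing an input, querying the black-box model, and fitting a proximity-weighted linear surrogate whose coefficients serve as the explanation. A simplified sketch; the Gaussian perturbation scale and exponential kernel are illustrative assumptions, not taken from the application or the cited art.

```python
import numpy as np

def lime_coefficients(model, x, n_samples=500, kernel_width=0.75, seed=0):
    """Locally weighted linear surrogate around input x (simplified LIME sketch).

    model: callable mapping an (n, d) array of inputs to (n,) scores.
    x:     (d,) input to be explained.
    Returns d coefficients approximating the model's local behavior at x.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Probe the model's neighborhood with Gaussian perturbations of x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, d))
    y = model(Z)
    # Proximity kernel: samples near x count more in the fit.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), Z])
    coef = np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
    return coef[1:]  # drop the intercept
```

Because the surrogate is fit from the model's outputs alone, this aligns with claim 15's dependence on "an output result of the model" rather than internal layer state.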
Regarding claim 18, claim 18 recites limitations similar to claim 1 and is similarly rejected following Step 2A Prong One, Step 2A Prong Two and Step 2B. Claim 18, however, is drawn to an information processing method for executing processing rather than an information processing device as recited by claim 1.

Step 1, MPEP 2106.03: Claim 18 recites a method and is directed to a statutory category of invention.

Regarding claim 19, claim 19 recites limitations similar to claim 1 and is similarly rejected following Step 2A Prong One, Step 2A Prong Two and Step 2B. Claim 19, however, is drawn to a non-transitory computer-readable medium storing an information processing program that, when executed, causes processing circuitry to perform a method rather than an information processing device as recited by claim 1.

Step 1, MPEP 2106.03: Claim 19 recites a non-transitory computer-readable medium and is directed to a statutory category of invention.

Regarding claim 20, claim 20 recites: “An information processing device that performs an action using a machine learning model, the information processing device comprising: a sensor; and a memory including a plurality of different basis generation algorithms that each includes a process or set of rules and is configured to generate basis information of the action; and processing circuitry configured to select, during the action, at least one basis generation algorithm of a plurality of different basis generation algorithms based on whether a real-time property is required, whether a locally approximated basis is desired, or whether a directionality of activating a concept is to be taken into account; output information indicating a basis of the action based on the basis information generated based on the at least one selected basis generation algorithm of the plurality of different basis generation algorithms and sensor information; and cause the information processing device to stop the action based on the basis information.”
Step 1, MPEP 2106.03: Claim 20 recites a device and is directed to a statutory category of invention.

Step 2A Prong One, MPEP 2106.04: Claim 20 recites selecting, during the action, at least one basis generation algorithm of a plurality of different basis generation algorithms based on whether a real-time property is required, whether a locally approximated basis is desired, or whether a directionality of activating a concept is to be taken into account; outputting information indicating a basis of an action based on basis information generated based on at least one basis generation algorithm of a plurality of different basis generation algorithms, each including a process or set of rules and configured to generate basis information of the action; and causing the information processing device to stop the action based on the basis information. These limitations are directed to the abstract idea of a mental process that can be practically performed in the human mind, with or without the use of a physical aid such as pen and paper, including observations, evaluations, judgments and opinions (e.g., a person, while doing an action such as walking, choosing an explanation for stopping the action from among different explanations based on real-time data, outputting that explanation, and stopping the action). See MPEP 2106.04(a)(2)(III).

Step 2A Prong Two, MPEP 2106.04(d): Claim 20 further recites an information processing device performing the action using a machine learning model and gathering information using a sensor. The additional elements of the sensor and the information processing device using a machine learning model for performing an action do not, alone or in combination, integrate the judicial exception into a practical application, as they are presented as generic computer tools recited at a high level of generality. See MPEP 2106.04(d), 2106.05(f).
Step 2B, MPEP 2106.05: As discussed above, the additional elements of Claim 20, including the use of generic computer tools such as a sensor or machine learning model, do not amount to significantly more than the judicial exception. The use of these limitations merely amounts to instructions to implement the abstract idea on a computer by using generic computer tools to perform the abstract idea. See MPEP 2106.05(f).

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.
This application currently names joint inventors. In considering patentability of the claims, the examiner presumes that the subject matter of the various claims was commonly owned as of the effective filing date of the claimed invention(s) absent any evidence to the contrary. Applicant is advised of the obligation under 37 CFR 1.56 to point out the inventor and effective filing dates of each claim that was not commonly owned as of the effective filing date of the later invention in order for the examiner to consider the applicability of 35 U.S.C. 102(b)(2)(C) for any potential 35 U.S.C. 102(a)(2) prior art against the later invention.

Claims 1-20 are rejected under 35 U.S.C. 103 as being unpatentable over Mori, Keisuke, et al., "Visual explanation by attention branch network for end-to-end learning-based self-driving," 2019 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2019 (“MORI”, previously presented), in view of Mathews et al., US 2021/0097382 A1, effective filing date 09/27/2019 (“MATHEWS”, previously presented), and in further view of Solmaz et al., US 2022/0161818 A1, effective filing date 04/05/2019 (“SOLMAZ”).

Regarding claim 1, MORI discloses an information processing device configured to: acquire a model having a structure of a neural network and input information input to the model (page 1578 ¶6: “To achieve complete self-driving in an end-to-end manner, we propose a network model that estimates both steering and throttle and visually explains the reason of network output. To this end, we introduce the following two ideas. One is to use the vehicle velocity as an additional input. Introducing the velocity enables consideration of both the surrounding environment and the state of a car itself, and it can control steering and throttle accurately.”; page 1578 ¶7-8: A. Steering and throttle controls by adding velocity; page 1579 Fig.
1); and generate, in real time, basis information indicating a basis for an output of the model after the input information is input to the model based on state information indicating a state of the model after the input of the input information to the model (page 1578 ¶6: “The other is to introduce the ABN framework to obtain an attention map. By introducing the ABN framework, we can estimate the control values while obtaining an attention map for visual explanation.”, page 1579 ¶1-3: B. Attention mechanism for visual explanation, Fig. 1) using at least one basis generation algorithm (page 1577 ¶5: “We build a network on the basis of an attention branch network (ABN) [7]. To apply the ABN structure to a regression problem, we propose a weighted global pooling (WGP) layer.”, ¶6: “To apply an ABN architecture for a regression problem, we propose the WGP. By introducing the WGP, we can estimate regression values from the attention branch while obtaining attention maps.”); and control a moving body based on the basis information (page 1577 ¶5: “In this paper, we tackle the above two problems and propose an end-to-end self-driving method. Our method has two characteristics. First, we estimate both steering and throttle simultaneously. To improve throttle control performance, we use the information of the car itself, i.e., the vehicle velocity, in addition to the surrounding environmental information, which can stably control throttle ... Experimental results using a driving simulator demonstrate that the proposed method can stably control steering and throttle.”). MORI fails to explicitly disclose processing circuitry and/or memories to perform the steps, and fails to disclose that the at least one basis generation algorithm used is selected, during operation of the moving body, from among a plurality of different generation algorithms that each include a process or set of rules based on a processing situation or a characteristic.
MATHEWS discloses methods for creating explainability maps (¶17-18) using known devices including processing circuitry and/or memory (¶44-49), an analogous art to the instant invention. In particular, MATHEWS discloses generating basis information for a model output by selecting a basis generation algorithm from a plurality of different basis generation algorithms each including a process or set of rules based on a processing situation or a characteristic (¶17: “Particularly, explainability can generate a heat map of an image that identifies important features of the image an AI model used to generate the output and come to a conclusion. Explainability may be implemented using gradient weighted class activation mapping (Grad-CAM), local interpretable model-agnostic explanations (LIME), and/or any other explainability technique.”, ¶33: “In the example deepfake analyzer 110 of FIG. 2, the explainability map generator 206 is implemented by Grad-CAM. However, in some examples, the explainability map generator may be implemented by LIME and/or another explainability technique.”). Therefore, it would have been obvious to one having ordinary skill in the art, with the teachings of MORI and MATHEWS before them, before the effective filing date of the claimed invention, to combine the selection of a basis generation algorithm from among a plurality of different basis generation algorithms for generating basis information for an output of a model using a device comprising processing circuitry, as taught by MATHEWS, with the use of a basis generation algorithm to generate basis information for the output of the model of MORI. One would have been motivated to make this combination in order to simply substitute one known element for another to obtain predictable results, as suggested by MATHEWS (¶33: “In the example deepfake analyzer 110 of FIG. 2, the explainability map generator 206 is implemented by Grad-CAM.
However, in some examples, the explainability map generator may be implemented by LIME and/or another explainability technique.”; KSR rationale, see MPEP 2143(I)(B)). MORI and MATHEWS fail to disclose wherein the selecting of the at least one basis generation algorithm is during operation of the moving body. SOLMAZ discloses methods of generating, in real time, decision information for a moving vehicle based on a processing situation (¶8). In particular, SOLMAZ discloses selecting, during operation of the vehicle, at least one basis generation algorithm (¶79, ¶89-92: “This input is also given to the safety assessment. Other than the shown inputs, other vehicle- or person-related data can also be used by the autonomous driving decision making to decide based on a set of final actions (illustrated bottom-right of FIG. 6). For simplicity, three probabilistic actions for safety are defined: [0090] 1) keeping the same pace, [0091] 2) slowing down, [0092] 3) breaking.”). Therefore, it would have been obvious to one having ordinary skill in the art, with the teachings of MORI, MATHEWS and SOLMAZ before them, before the effective filing date of the claimed invention, to make the selection of the basis algorithm of MORI and MATHEWS during movement of a moving body such as a vehicle, as suggested by SOLMAZ. One would have been motivated to make this combination in order to provide improved safety in operation of moving bodies, as suggested by SOLMAZ (¶17, ¶21, ¶89-92). Regarding claim 2, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the processing circuitry is configured to generate the basis information so as to indicate a basis of processing using an output of the model (page 1579 ¶3, Fig. 1).
Regarding claim 3, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the model is for control of a device that autonomously acts (page 1577 ¶5: “In this paper, we tackle the above two problems and propose an end-to-end self-driving method. Our method has two characteristics. First, we estimate both steering and throttle simultaneously. To improve throttle control performance, we use the information of the car itself, i.e., the vehicle velocity, in addition to the surrounding environmental information, which can stably control throttle ... Experimental results using a driving simulator demonstrate that the proposed method can stably control steering and throttle.”), and the processing circuitry is configured to generate the basis information so as to indicate a basis of the control of the device after the input information is input to the model (page 1577 ¶5: “Second, we generate an attention map, which visualizes the region in which the network is focused as a heat map.”, page 1579 Fig. 1). Regarding claim 4, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the model is for control of a moving body that is autonomously movable (page 1577 ¶5: “In this paper, we tackle the above two problems and propose an end-to-end self-driving method. Our method has two characteristics. First, we estimate both steering and throttle simultaneously. To improve throttle control performance, we use the information of the car itself, i.e., the vehicle velocity, in addition to the surrounding environmental information, which can stably control throttle ... Experimental results using a driving simulator demonstrate that the proposed method can stably control steering and throttle.”), and the processing circuitry is configured to generate the basis information so as to indicate a basis of the control of the moving body after the input information is input to the model (page 1577 ¶5: “Second, we generate an attention map, which visualizes the region in which the network is focused as a heat map.”, page 1579 Fig. 1). Regarding claim 5, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 4, and MORI further discloses wherein the moving body is a vehicle operating by automatic driving (page 1577 ¶5: “In this paper, we tackle the above two problems and propose an end-to-end self-driving method. Our method has two characteristics. First, we estimate both steering and throttle simultaneously. To improve throttle control performance, we use the information of the car itself, i.e., the vehicle velocity, in addition to the surrounding environmental information, which can stably control throttle ... Experimental results using a driving simulator demonstrate that the proposed method can stably control steering and throttle.”). Regarding claim 6, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 4, and MORI further discloses wherein the processing circuitry is configured to generate the basis information so as to indicate a basis of a movement direction of the moving body (page 1577 ¶5: “Second, we generate an attention map, which visualizes the region in which the network is focused as a heat map.”, page 1579 Fig. 1, page 1581 ¶1: “The attention map highlights the center line, and steering is estimated as a positive value, i.e., turn to the right. The visualization mask of the same scene responds to a left side line, although a positive steering value is also estimated. In the scene of Fig.
4(b) in which the wheel must be turned to the left, the estimated steering values are negative, and the right side line is highlighted in both the attention map and visualization masks.”). Regarding claim 7, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the input information is sensor information detected by a sensor (page 1577 ¶2: “In this approach, we collect in-vehicle camera images and the corresponding control values when human drivers control a vehicle. By training a network with the collected data in an end-to-end manner, a vehicle can be automatically controlled in the same way a human driver would control it.”), and the processing circuitry is configured to generate the basis information of the model to which the input information is input in response to the detection by the sensor (page 1577 ¶2: “In this approach, we collect in-vehicle camera images and the corresponding control values when human drivers control a vehicle. By training a network with the collected data in an end-to-end manner, a vehicle can be automatically controlled in the same way a human driver would control it.”). Regarding claim 8, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the input information is image information; and the model outputs a recognition result of image information in response to an input of the image information (page 1577 ¶2: “In this approach, we collect in-vehicle camera images and the corresponding control values when human drivers control a vehicle. By training a network with the collected data in an end-to-end manner, a vehicle can be automatically controlled in the same way a human driver would control it.”).
Regarding claim 9, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the processing circuitry is configured to generate image information indicating a basis for an output of the model as the basis information (page 1579 Fig. 1(b)). Regarding claim 10, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 9, and MORI further discloses wherein the processing circuitry is configured to generate a heat map indicating a basis for an output of the model as the basis information (page 1577 ¶5: “Second, we generate an attention map, which visualizes the region in which the network is focused as a heat map.”). Regarding claim 11, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the model includes a convolutional neural network (CNN) (page 1579 Fig. 1(b)). Regarding claim 12, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 11, and MORI further discloses wherein the state information includes a state of a convolution layer of the model (page 1579 Fig. 1(b)). Regarding claim 13, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 12, and MORI further discloses wherein the processing circuitry is configured to generate the basis information by gradient-weighted class activation mapping (Grad-CAM) (page 1578 ¶2: “A guided backpropagation [10] and a gradient-weighted class activation mapping (Grad-CAM) [12] have been proposed. These methods obtain the maps by using only the positive values of gradients of a specific class. These are used as a general analysis method of CNNs because these can be applied to any pre-trained network models and an attention map of a specific object class.”, ¶3: “However, CAM tends to decrease classification performance because it needs to replace a fully connected layer with a convolutional layer. 
To resolve this problem, an attention branch network (ABN) [7] has been proposed.”). Regarding claim 14, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the input information is output information from another model (page 1579: Fig. 1 – back propagation), the model performs an output in response to an input of the output information from the other model (page 1579: Fig. 1 – back propagation), and the processing circuitry is configured to generate the basis information of the model to which the input information is input in response to the output from the other model (page 1579: Fig. 1). Regarding claim 15, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the state information includes an output result of the model after the input information is input to the model (page 1579 Fig. 1(b)). Regarding claim 16, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MATHEWS further discloses wherein the processing circuitry is configured to generate the basis information by processing related to local interpretable model-agnostic explanations (LIME) (¶17, ¶33). Regarding claim 17, MORI, MATHEWS and SOLMAZ disclose the information processing device according to claim 1, and MORI further discloses wherein the processing circuitry is configured to cause the basis information to be displayed (page 1580 ¶8: “Next, we analyze the obtained attention maps. To visualize attention maps, we built the proposed ABN by adding the attention branch structure shown in Tab.”, page 1581 Fig. 4). Regarding claim 18, claim 18 recites limitations similar to claim 1 and is similarly rejected. Regarding claim 19, claim 19 recites limitations similar to claim 1 and is similarly rejected. Regarding claim 20, claim 20 recites limitations similar to claim 7 and is similarly rejected.
Further, MATHEWS discloses the selection of the basis algorithm based on whether a real-time property is required, whether a locally approximated basis is desired, or whether a directionality of activating a concept is to be taken into account (¶17: local interpretable model-agnostic explanations), and MORI discloses the control of the moving body is to stop (page 1581 ¶2: “Figure 4(c) shows a scene in which a car needs to stop. In this scene, the attention map highlights a brake lamp in the front truck. And, the estimated throttle is 0, that is, the network decision makes the car stop.”).

Conclusion

The prior art made of record and not relied upon is considered pertinent to applicant's disclosure:
Navarro, A., et al. "Using Reinforcement Learning and Simulation to Develop Autonomous Vehicle Control Strategies." SAE Technical Paper (2020): 01-0737.
Glomsrud, Jon Arne, et al. "Trustworthy versus explainable AI in autonomous vessels." Proceedings of the International Seminar on Safety and Security of Autonomous Vessels (ISSAV) and European STAMP Workshop and Conference (ESWC). Vol. 37. 2019.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to ANDREW L TANK whose telephone number is (571)270-1692. The examiner can normally be reached Monday-Thursday 9a-6p. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Matthew Ell, can be reached at 571-270-3264. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300. Information regarding the status of published or unpublished applications may be obtained from Patent Center.
Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000. /ANDREW L TANK/Primary Examiner, Art Unit 2141
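The rejection of claim 13 relies on MORI's description of Grad-CAM, which weights the final convolutional feature maps by the global average of the class-score gradients and keeps only the positive evidence. As background, the core weighting step can be sketched in plain Python; the 2x2 feature maps and gradients below are hypothetical illustrative values, not taken from MORI or the application.

```python
# Minimal Grad-CAM weighting sketch (illustrative; not MORI's implementation).
# Inputs: K feature maps A_k (each H x W) from the last conv layer, and the
# gradients dY/dA_k of the target class score with respect to each map.

def grad_cam(feature_maps, gradients):
    """Return a heat map: ReLU(sum_k alpha_k * A_k), where alpha_k is the
    global average (global average pooling) of the k-th gradient map."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    # alpha_k: global-average-pool each gradient map
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    # Weighted sum of feature maps, then ReLU to keep positive evidence only
    return [
        [
            max(0.0, sum(a * fm[i][j] for a, fm in zip(alphas, feature_maps)))
            for j in range(w)
        ]
        for i in range(h)
    ]

# Hypothetical example: two 2x2 feature maps; the second map's gradients are
# negative, so its contribution is suppressed by the ReLU.
A = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
dY = [[[1.0, 1.0], [1.0, 1.0]], [[-1.0, -1.0], [-1.0, -1.0]]]
print(grad_cam(A, dY))  # → [[1.0, 0.0], [0.0, 2.0]]
```

The ReLU step is what MORI's quoted passage refers to as "using only the positive values of gradients of a specific class": regions that argue against the class are clipped to zero rather than shown in the heat map.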

Prosecution Timeline

Apr 14, 2022: Application Filed
Apr 19, 2025: Non-Final Rejection — §101, §103
Jun 11, 2025: Interview Requested
Jun 18, 2025: Applicant Interview (Telephonic)
Jun 18, 2025: Examiner Interview Summary
Jul 15, 2025: Response Filed
Oct 18, 2025: Final Rejection — §101, §103
Dec 22, 2025: Request for Continued Examination
Jan 08, 2026: Response after Non-Final Action
Jan 09, 2026: Non-Final Rejection — §101, §103
Mar 31, 2026: Interview Requested
Apr 07, 2026: Applicant Interview (Telephonic)
Apr 16, 2026: Examiner Interview Summary

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12585381: ADVANCED KEYBOARD BASED SEARCH (granted Mar 24, 2026; 2y 5m to grant)
Patent 12585730: MANAGING MACHINE LEARNING MODELS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12579479: COUNTERFACTUAL SAMPLES FOR MAINTAINING CONSISTENCY BETWEEN MACHINE LEARNING MODELS (granted Mar 17, 2026; 2y 5m to grant)
Patent 12566998: SYSTEM, METHODS, AND PROCESSES FOR MODEL PERFORMANCE AGGREGATION (granted Mar 03, 2026; 2y 5m to grant)
Patent 12555037: MODEL MANAGEMENT DEVICE AND MODEL MANAGING METHOD (granted Feb 17, 2026; 2y 5m to grant)


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 68%
Grant Probability With Interview: 99% (+31.2%)
Median Time to Grant: 4y 0m
PTA Risk: High
Based on 538 resolved cases by this examiner. Grant probability derived from career allow rate.
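The headline figures above are simple ratios over the examiner's resolved cases. A quick check of the arithmetic, using only the counts reported on this page (366 granted of 538 resolved, +13.0% vs the Tech Center average), looks like:

```python
# Recompute the examiner statistics shown above from the page's raw counts.
granted, resolved = 366, 538

allow_rate = granted / resolved * 100
print(f"Career allow rate: {allow_rate:.1f}%")  # → Career allow rate: 68.0%

# The page reports the rate as +13.0% versus the Tech Center average,
# which implies the TC 2100 baseline below.
tc_avg = allow_rate - 13.0
print(f"Implied TC 2100 average: {tc_avg:.1f}%")  # → ~55.0%

# Note: the +31.2% interview lift is stated relative to resolved cases
# without an interview; the underlying interview counts are not shown here.
```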
