Prosecution Insights
Last updated: April 19, 2026
Application No. 18/458,770

INFORMATION PROCESSING APPARATUS, SYSTEM, METHOD, AND STORAGE MEDIUM

Status: Non-Final OA — §103
Filed: Aug 30, 2023
Examiner: AN, IG TAI
Art Unit: 3662
Tech Center: 3600 — Transportation & Electronic Commerce
Assignee: Kabushiki Kaisha Toshiba
OA Round: 3 (Non-Final)
Grant Probability: 56% (Moderate)
Expected OA Rounds: 3-4
Time to Grant: 3y 8m
With Interview: 82%

Examiner Intelligence

Career Allow Rate: 56% (292 granted / 523 resolved; +3.8% vs TC avg)
Interview Lift: +26.1% for resolved cases with interview (strong)
Avg Prosecution: 3y 8m (typical timeline)
Total Applications: 555 across all art units (32 currently pending)
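The headline figures above are simple arithmetic on the examiner's career counts. A minimal sketch in Python (the rounding convention is an assumption, since the page only shows rounded values):

```python
# Career counts shown on the page
granted = 292
resolved = 523

# Career allow rate: granted / resolved, rounded to a whole percent
allow_rate_pct = round(granted / resolved * 100)   # 55.8% -> 56

# Interview lift is reported as +26.1 percentage points; adding it to the
# base rate reproduces the "with interview" figure shown on the page
interview_lift = 26.1
with_interview_pct = round(allow_rate_pct + interview_lift)  # 56 + 26.1 -> 82
```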

Statute-Specific Performance

§101: 19.3% (-20.7% vs TC avg)
§103: 49.8% (+9.8% vs TC avg)
§102: 19.0% (-21.0% vs TC avg)
§112: 10.2% (-29.8% vs TC avg)

Tech Center averages are estimates • Based on career data from 523 resolved cases
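Each per-statute rate is paired with a delta against the Tech Center average, so the implied baseline can be recovered as rate minus delta. A quick consistency check in Python (treating the deltas as percentage-point differences is an assumption):

```python
# (per-statute rate %, delta vs Tech Center average in percentage points)
statute_stats = {
    "§101": (19.3, -20.7),
    "§103": (49.8, +9.8),
    "§102": (19.0, -21.0),
    "§112": (10.2, -29.8),
}

# Implied Tech Center average per statute: rate - delta
implied_tc_avg = {s: round(rate - delta, 1)
                  for s, (rate, delta) in statute_stats.items()}
# All four statutes imply the same 40.0% baseline, so the deltas are
# internally consistent with one another
```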

Office Action

§103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Continued Examination Under 37 CFR 1.114

A request for continued examination under 37 CFR 1.114, including the fee set forth in 37 CFR 1.17(e), was filed in this application after final rejection. Since this application is eligible for continued examination under 37 CFR 1.114, and the fee set forth in 37 CFR 1.17(e) has been timely paid, the finality of the previous Office action has been withdrawn pursuant to 37 CFR 1.114. Applicant's submission filed on 20 February 2026 has been entered.

Summary

The Amendment filed on 20 February 2026 has been acknowledged. Claims 1 and 17 – 18 are amended. Claims 2 – 16 and 19 are cancelled. Claims 20 – 33 are newly presented. Currently, claims 1, 17 – 18 and 20 – 33 are pending and considered as set forth.

Response to Amendment

Applicant's amendments to the claims are sufficient to overcome the 35 U.S.C. 101 rejections set forth in the previous Office action.

Response to Arguments

Applicant's arguments with respect to claims 1 and 17 – 18 have been considered but are moot because the new ground of rejection does not rely on any reference applied in the prior rejection of record for any teaching or matter specifically challenged in the argument. Claims 20 – 33 are newly presented and rejected as set forth below.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status. The following is a quotation of 35 U.S.C.
103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

The factual inquiries for establishing a background for determining obviousness under 35 U.S.C. 103 are summarized as follows:
1. Determining the scope and contents of the prior art.
2. Ascertaining the differences between the prior art and the claims at issue.
3. Resolving the level of ordinary skill in the pertinent art.
4. Considering objective evidence present in the application indicating obviousness or nonobviousness.

Claims 1, 17 – 18, 20 – 21, and 23 – 29 are rejected under 35 U.S.C. 103 as being unpatentable over Chen et al. (hereinafter Chen) (US 2021/0026355 A1) in view of Nishikawa (US 2023/0155422 A1).

As per claim 1, Chen teaches the limitations of: an information processing apparatus comprising: a storage (See at least paragraph 84; The methods may also be embodied as computer-usable instructions stored on computer storage media.); and a hardware processor operably coupled to the storage and configured to execute processes (See at least paragraph 84; Now referring to FIGS. 7-9, each block of methods 700, 800, and 900, described herein, comprises a computing process that may be performed using any combination of hardware, firmware, and/or software. For instance, various functions may be carried out by a processor executing instructions stored in memory. The methods may also be embodied as computer-usable instructions stored on computer storage media.)
comprising: acquiring, from among information stored in advance in the storage, (i) a correspondence relationship between an object disposed in the space and information indicating a radio wave transmittance of the object which is based on a substance included at a predetermined ratio or more among substances forming the object, and (ii) movement plan information of the moving vehicle (See at least paragraph 37 and 42 – 45; firmware associated with a particular LiDAR or RADAR sensor(s) may be used to control the sensor(s) to emit light waves (for LiDAR) or radio waves (for RADAR) and detect reflections off of objects and materials in the environment to capture and/or process the sensor data 102. The sensor data 102 may include raw sensor data, point cloud data (e.g., LiDAR and/or RADAR point cloud data), and/or reflection data processed into some other format. Depending on the type of sensor, reflection data may include bearing, azimuth, elevation, range (e.g., time of beam flight), intensity, Doppler velocity, RADAR cross section (RCS), reflectivity, SNR, and/or the like. Generally, reflections and reflection characteristics may depend on the objects in the environment, speeds, materials, sensor mounting position and orientation, etc. In some cases, reflection data may be combined with position and orientation data (e.g., from GNSS and IMU sensors) to form a point cloud representing detected reflections from the environment. … images with the same or different views may be generated, with each image being input into a separate channel of the machine learning model(s) 108. By way of non-limiting example, different sensor(s) 101 (whether the same type or a different of sensor) may be used to generate image data (e.g., LiDAR range image, camera images, etc.) having the same (e.g., perspective) view of the environment in a common image space, and image data from different sensor(s) 101 or sensor modalities may be stored in separate channels of a tensor. 
Since image data may be evaluated as an input to the machine learning model(s) 108, there may be a tradeoff between prediction accuracy and computational demand. As such, a desired spatial dimension for an image may be selected as a design choice. Additionally or alternatively, other pre-processing 104 techniques may implement. For example, in some cases, the sensor data 102 (e.g., one or more images) may be analyzed to determine characteristics such as optical flow, and a representation of the optical flow (e.g., optical flow vectors) may be used as at least a portion of the input data 106 (e.g., stored in a corresponding channel of an input tensor). Other types of pre-processing techniques are known and contemplated within the scope of the present disclosure. In any event, one or more images, range data, reflection data, optical flow data, and/or other data may be stored and/or encoded into a suitable representation (e.g., stored in corresponding channels of the input data 106), which may serve as the input into the machine learning model(s) 108. As such, the input data 106 (e.g., one or more images) may include multiple layers, with pixel values for the different layers storing different data (e.g., values representative of color, intensity, range, reflection characteristics, and/or other types). In some embodiments, for each pixel that bins (e.g., aggregates) sensor data representing multiple reflections, a set of features may be calculated, determined, or otherwise selected from reflection characteristics of the reflections (e.g., bearing, azimuth, elevation, range, intensity, reflectivity, SNR, etc.). In some cases, when sensor data representing multiple reflections is binned together in a pixel of a projection image (e.g., a range image), sensor data representing one of the reflections (e.g., the reflection with the closest range) may be represented in the projection image and the sensor data representing the other reflections may be dropped. 
For example, in a range image with a pixel that bins multiple reflections together, the pixel may store a range value corresponding to the reflection with the closest range. Additionally or alternatively, when there are multiple reflections binned together in a pixel, thereby forming a tower of points, a particular feature for that pixel may be calculated by aggregating a corresponding reflection characteristic for the multiple overlapping reflections (e.g., using standard deviation, average, etc.). Generally, any given pixel may have multiple associated features values, which may be stored in corresponding channels of a tensor. In any event, the sensor data 102 may be encoded into a variety of types of the input data 106 (e.g., an image(s) captured by a camera(s), a projection image such as a range image, a tensor encoding image and range data, etc.), and the input data 106 may serve as the input into machine learning model(s) 108. At a high level, the machine learning model(s) 108 may detect objects such as instances of obstacles, static parts of the environment, and/or other objects represented in the input data 106 (e.g., a camera image, and/or other sensor data stacked into corresponding channels of an input tensor). For example, the machine learning model(s) 108 may extract classification data representing pixels that belong to certain classes of detected objects (e.g., the class confidence data 110), object instance data such as location, geometry, and/or orientation data for detected objects (e.g., the instance regression data 111), classification data representing pixels that belong to certain instances of detected objects (e.g., instance confidence data 112), and/or range values representing distances to detected objects (e.g., the depth data 113). 
Any or all of these data may be post-processed (e.g., via post-processing 114) to identify bounding shapes, class labels, instance labels, and/or range data for detected objects); determining whether an object having a radio wave transmittance less than a predetermined value exists between at least a part of a route along which the moving vehicle moves and the antenna, based on the map information in which the information indicating the radio wave transmittance of the object is included, the route being included in the movement plan information (See at least paragraph 59 and 76; the post-processing 114 may include a connected-component labeling 340 process to decode the instance regression data 111. For example, the instance clustering head 240 may predict a confidence map (e.g., per channel) that represents classification values (e.g., probability, score, or logit) indicating whether each pixel belongs to a particular instance. Thus, the instance clustering head 240 may predict a depth-wise probability distribution per pixel representing the likelihood that each pixel belongs to an object instance corresponding to each channel. Where each channel is assigned to identify a single instance, a connected-component analysis (e.g., connected-component labeling 340) may be performed on each confidence map to identify a region of the map corresponding to the instance (e.g., by filtering out pixels with classification values below a threshold, clustering remaining pixels, applying smoothing, etc.). In some scenarios, a single instance might be occluded in a manner that splits the instance into two distinct connected regions (e.g., an instance that is partially occluded by a pole). As such, in some embodiments, distinct connected regions that are split in a particular manner (e.g., split substantially symmetrically, split by a gap or hole smaller than a threshold distance, etc.) may be joined to form a single composite region. 
… Once the locations, geometry, orientations, class labels, instance labels, and/or range values for detected objects have been determined, 2D pixel coordinates defining the detected objects may be converted to 3D world coordinates for use with corresponding class labels by the autonomous vehicle in performing one or more operations (e.g., obstacle avoidance, lane keeping, lane changing, path planning, mapping, etc.). In some embodiments, a low-level perception stack that does not use a DNN may process sensor data to detect objects in parallel to the machine learning model(s) 108 (e.g., for redundancy). In any event, returning to FIG. 1, the object detections 116 (e.g., bounding boxes, closed polylines, or other bounding shapes) may be used by control component(s) of the autonomous vehicle 1000 depicted in FIGS. 10A-10D, such as an autonomous driving software stack 122 executing on one or more components of the vehicle 1000 (e.g., the SoC(s) 1004, the CPU(s) 1018, the GPU(s) 1020, etc.). For example, the vehicle 1000 may use this information (e.g., instances of obstacles) to navigate, plan, or otherwise perform one or more operations (e.g., obstacle avoidance, lane keeping, lane changing, merging, splitting, etc.) within the environment.); changing the route upon determining that the object exists (See at least paragraph 68 – 69; The world model may be used to help inform planning component(s) 128, control component(s) 130, obstacle avoidance component(s) 132, and/or actuation component(s) 134 of the drive stack 122. The obstacle perceiver may perform obstacle perception that may be based on where the vehicle 1000 is allowed to drive or is capable of driving (e.g., based on the location of the drivable or other navigable paths defined by avoiding detected obstacles), and how fast the vehicle 1000 can drive without colliding with an obstacle (e.g., an object, such as a structure, entity, vehicle, etc.)
that is sensed by the sensors of the vehicle 1000 and/or the machine learning model(s) 108. The path perceiver may perform path perception, such as by perceiving nominal paths that are available in a particular situation. In some examples, the path perceiver may further take into account lane changes for path perception. A lane graph may represent the path or paths available to the vehicle 1000, and may be as simple as a single path on a highway on-ramp. In some examples, the lane graph may include paths to a desired lane and/or may indicate available changes down the highway (or other road type), or may include nearby lanes, lane changes, forks, turns, cloverleaf interchanges, merges, and/or other information); and controlling the moving vehicle such that the moving vehicle moves along the changed route (See at least paragraph 76; The control component(s) 130 may follow a trajectory or path (lateral and longitudinal) that has been received from the behavior selector (e.g., based on object detections 116) of the planning component(s) 128 as closely as possible and within the capabilities of the vehicle 1000. The control component(s) 130 may use tight feedback to handle unplanned events or behaviors that are not modeled and/or anything that causes discrepancies from the ideal (e.g., unexpected delay). In some examples, the control component(s) 130 may use a forward prediction model that takes control as an input variable, and produces predictions that may be compared with the desired state (e.g., compared with the desired lateral and longitudinal path requested by the planning component(s) 128). The control(s) that minimize discrepancy may be determined.). 
Chen does not explicitly teach, but Nishikawa teaches, the limitation of: acquiring map information related to a space in which a moving vehicle moves, the map information including information indicating a position of an antenna that radiates a control signal for controlling the moving vehicle (See at least abstract and paragraph 46 and 59; an information processing device that is capable of generating a received power distribution around a shield or an obstacle. The information processing device according to the present disclosure includes a detection unit configured to detect presence of an obstacle by using a measurement value of received power measured by a measurement device that receives a wireless radio wave, an estimation unit configured to estimate a change in received power in a predetermined area before detecting presence of the obstacle and after detecting presence of the obstacle, and an updating unit configured to update a first received power distribution in the predetermined area being stored in a storage device, which is generated based on the measurement value before detecting presence of the obstacle, by using the change in received power. … The measurement of the received power in a predetermined area will be described with reference to FIG. 4. FIG. 4 illustrates a diagram viewed from above of an area having a certain space, for example, in a warehouse or a factory where an AGV moves. A plurality of rectangular figures in FIG. 4 indicate obstacles. Since radio waves are shielded by obstacles, the obstacles may be referred to as shields. The obstacle may be, for example, a post, shelf, or the like in a warehouse. A circular figure in FIG. 4 indicates a transmission source of a radio wave. The measurement device 20 moves as indicated by an arrow illustrated in FIG. 4, and measures the received power in the area illustrated in FIG. 4.
In addition, the measurement device 20 may measure the received power on a path or a trajectory other than that indicated by the arrow in FIG. 4. … The estimation unit 12 simulates the received power around the newly occurring obstacle by using the position of the obstacle estimated by the detection unit 11, the position of the transmission source, transmitted power of the radio wave to be transmitted by the transmission source, an angle of an antenna in the transmission source, and the like. The estimation unit 12 may hold in advance information relating to the position of the transmission source and the transmitted power of the radio wave to be transmitted by the transmission source); It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include acquiring map information related to a space in which a moving vehicle moves, the map information including information indicating a position of an antenna that radiates a control signal for controlling the moving vehicle as taught by Nishikawa in the system of Chen, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable. Regarding claims 17 – 18 and 20 – 21: Claims 17 – 18 and 20 – 21 are rejected using the same rationale, mutatis mutandis, applied to claim 1 above, respectively. As per claim 23, Chen teaches the limitations of: wherein the first region is a region corresponding to a route along which the moving vehicle moves (See at least paragraph 68 – 69). 
As per claim 24, the combination of Chen and Nishikawa teaches the limitations of: wherein the processes further comprise adding a result of the estimating to the map information, and outputting the map information having the evaluation result added thereto (Chen, see at least paragraph 48 – 49, and Nishikawa, see at least paragraph 53). As per claim 25, the combination of Chen and Nishikawa teaches the limitations of: wherein the information indicating the radio wave transmittance of the object is further added to the map information to be output (Chen, see at least paragraph 37, and Nishikawa, see at least paragraph 53). As per claim 26, the combination of Chen and Nishikawa teaches the limitations of: wherein the radio wave transmittance of the object is determined based on an electrical characteristic or a magnetic characteristic of the object (Chen, see at least paragraph 37 and 44, Nishikawa, see at least paragraph 59). As per claim 27, the combination of Chen and Nishikawa teaches the limitation of: wherein the electrical characteristic or the magnetic characteristic includes at least one of electrical conductivity, dielectric constant, and magnetic permeability of the object (Nishikawa, see at least paragraph 65). As per claim 28, the combination of Chen and Nishikawa teaches the limitations of: wherein the radio wave transmittance of the object is determined based on a reflection coefficient or a transmission coefficient calculated from the electrical characteristic or the magnetic characteristic of the object (Chen, see at least paragraph 37 and 44, and Nishikawa, see at least paragraph 80). As per claim 29, the combination of Chen and Nishikawa teaches the limitations of: wherein the radio wave transmittance of the object is determined based on an incident angle of the radio wave to the object or a polarized wave of the radio wave (Chen, see at least paragraph 44, and Nishikawa, see at least paragraph 59). Claims 22 and 33 are rejected under 35 U.S.C. 
103 as being unpatentable over Chen and Nishikawa in view of Achour et al. (hereinafter Achour) (US 2018/0351250 A1).

As per claim 22, the combination of Chen and Nishikawa teaches all the limitations of the claimed invention but does not explicitly teach the limitation of: wherein the first region is at least one of regions obtained by dividing the space into a lattice shape. Achour teaches the limitation of: wherein the first region is at least one of regions obtained by dividing the space into a lattice shape (See at least paragraph 31 and 42). It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify the use of radio waves to detect objects or obstacles and control the vehicle remotely in the combination of Chen and Nishikawa to include wherein the first region is at least one of regions obtained by dividing the space into a lattice shape, as taught by Achour, in order to distribute the radio waves equally across the area and obtain a radiation result (paragraph 32 and 41).

As per claim 33, the combination of Chen, Nishikawa and Achour teaches the limitation of: wherein the radio wave transmittance is set depending on a temperature of the space (Achour, see at least paragraph 19).

Claims 30 – 32 are rejected under 35 U.S.C. 103 as being unpatentable over Chen and Nishikawa in view of Gillett (US 2021/0132604 A1).

As per claim 30, the combination of Chen and Nishikawa teaches the limitations of: wherein the processes further comprise: determining whether the object having the radio wave transmittance less than a predetermined value exists between at least a part of a route along which the moving vehicle moves and the antenna (Nishikawa, see at least paragraph 59); and changing the route upon determining that the object exists (Nishikawa, see at least paragraph 57 – 59).
The combination of Chen and Nishikawa does not teach, but Gillett teaches, the limitation of: controlling the moving vehicle such that the moving vehicle moves along the changed route (see at least paragraph 29). It would have been obvious to one of ordinary skill in the art before the effective filing date of the invention to include controlling the moving vehicle such that the moving vehicle moves along the changed route as taught by Gillett in the system of Chen and Nishikawa, since the claimed invention is merely a combination of old elements, and in the combination each element merely would have performed the same function as it did separately, and one of ordinary skill in the art would have recognized that the results of the combination were predictable.

As per claim 31, the combination of Chen, Nishikawa and Gillett teaches the limitation of: wherein the processes further comprise issuing, based on the map information and the information indicating the radio wave transmittance of the object, an instruction controlling to dispose the object far from the antenna in the space (Gillett, see at least paragraph 52).

As per claim 32, the combination of Chen, Nishikawa and Gillett teaches the limitation of: and the moving vehicle, which is communicably connected to the information processing apparatus, wherein the moving vehicle is controlled based on a result of the estimating (Nishikawa, see at least abstract, and Gillett, see at least paragraph 50 and 52).

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to IG TAI AN, whose telephone number is (571) 270-5110. The examiner can normally be reached M - F: 10:00AM - 4:00PM. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.
If attempts to reach the examiner by telephone are unsuccessful, the examiner’s supervisor, Aniss Chad, can be reached at (571) 270-3832. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/IG TAI AN/
Primary Examiner, Art Unit 3662

Prosecution Timeline

Aug 30, 2023: Application Filed
Jun 24, 2025: Non-Final Rejection — §103
Sep 26, 2025: Response Filed
Nov 18, 2025: Final Rejection — §103
Feb 20, 2026: Request for Continued Examination
Mar 09, 2026: Response after Non-Final Action
Mar 27, 2026: Non-Final Rejection — §103 (current)
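Given the Aug 30, 2023 filing date and the examiner's 3y 8m median pendency, a rough projected grant month can be computed. A sketch in Python (the whole-month arithmetic and snapping to the first of the month are simplifying assumptions, not necessarily how the dashboard computes it):

```python
from datetime import date

filed = date(2023, 8, 30)
median_years, median_months = 3, 8   # examiner's median time to grant

# Convert the filing date to a month count, add the median pendency in
# months, then convert back to a calendar month
months_total = filed.year * 12 + (filed.month - 1) + median_years * 12 + median_months
projected = date(months_total // 12, months_total % 12 + 1, 1)
# -> projects a grant around April 2027
```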

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12594902 — VEHICLE WITH CONTROLLED HOOD MOVEMENT (granted Apr 07, 2026; 2y 5m to grant)
Patent 12592171 — VEHICULAR DRIVING ASSIST SYSTEM WITH HEAD UP DISPLAY (granted Mar 31, 2026; 2y 5m to grant)
Patent 12592067 — EARLY WARNING METHOD FOR ANTI-COLLISION, VEHICLE MOUNTED DEVICE AND STORAGE MEDIUM (granted Mar 31, 2026; 2y 5m to grant)
Patent 12584745 — DYNAMIC EASYROUTING UTILIZING ONBOARD SENSORS (granted Mar 24, 2026; 2y 5m to grant)
Patent 12572144 — GENERATING ENVIRONMENTAL PARAMETERS BASED ON SENSOR DATA USING MACHINE LEARNING (granted Mar 10, 2026; 2y 5m to grant)
Study what changed to get past this examiner. Based on 5 most recent grants.


Prosecution Projections

Expected OA Rounds: 3-4
Grant Probability: 56%
With Interview: 82% (+26.1%)
Median Time to Grant: 3y 8m
PTA Risk: High

Based on 523 resolved cases by this examiner. Grant probability derived from career allow rate.
