Prosecution Insights
Last updated: April 19, 2026
Application No. 18/386,915

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, PROGRAM, AND FLIGHT OBJECT

Final Rejection §103
Filed
Nov 03, 2023
Examiner
SOHRABY, PARDIS
Art Unit
2664
Tech Center
2600 — Communications
Assignee
Sony Group Corporation
OA Round
4 (Final)
79%
Grant Probability
Favorable
5-6
OA Rounds
3y 0m
To Grant
89%
With Interview

Examiner Intelligence

Grants 79% — above average
79%
Career Allow Rate
73 granted / 92 resolved
+17.3% vs TC avg
+9.7%
Interview Lift
Moderate (~+10%) lift; resolved cases with vs. without interview
Typical timeline
3y 0m
Avg Prosecution
21 currently pending
Career history
113
Total Applications
across all art units

Statute-Specific Performance

§101: 14.4% (-25.6% vs TC avg)
§103: 58.7% (+18.7% vs TC avg)
§102: 16.2% (-23.8% vs TC avg)
§112: 9.4% (-30.6% vs TC avg)
Tech Center average shown as the comparison baseline (estimate) • Based on career data from 92 resolved cases

Office Action

§103
Detailed Action

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statement (IDS) submitted on 9/30/2025 is in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statement is being considered by the examiner.

Response to Amendment

The amendments and associated applicant arguments/remarks filed on 12/9/2025 were received and considered. Claims 1, 4, 10, 11, 17, and 18 have been amended. Claims 1-18 are pending.

Response to Arguments

Applicant's arguments filed 12/9/2025 have been fully considered but they are not persuasive. Applicant argues, specifically on pages 10-13 of the Remarks, that the prior art Ebrahimi does not teach "expand the three-dimensional observation result based on the prior map." Respectfully, the examiner disagrees. Ebrahimi teaches (“the processor expands the area of overlap to include a number of readings immediately before and after (or spatially adjacent) the readings within the identified overlapping area. Once an area of overlap is identified (e.g., as a bounding box of pixel positions or threshold angle of a vertical plane at which overlap starts in each field of view).” Ebrahimi, col. 7, lines 30-36); the readings within the identified overlapping area are the claimed prior map. Ebrahimi also teaches (“maps are three dimensional maps, e.g., indicating the position of walls, furniture, doors, and the like in an environment being mapped. In some embodiments, maps are two dimensional maps, e.g., point clouds or polygons or finite ordered list indicating obstructions at a given height (or range of height, for instance from zero to 5 or 10 centimeters or less) above the driving surface.” Ebrahimi, col. 23, lines 62-67, col. 24, line 1). As was noted in the office action dated 9/19/2025, Ebrahimi teaches expanding the observation map throughout the reference, and Figs. 1A-2B clearly show expanding the observation. Ebrahimi also teaches (“a processor of an autonomous (or semi-autonomous) vehicle considers multiple possible scenarios wherein the autonomous vehicle is located in other likely locations in addition to the location estimated by the processor. As the autonomous vehicle moves within the environment, the processor gains information of its surroundings from sensory devices which it uses to eliminate less likely scenarios. For example, consider a processor of an autonomous vehicle estimating itself to be 100 cm away from a wall. To account for measurement noise the processor considers additional likely scenarios where the vehicle is, for example, 102, 101, 99 and 98 cm away from the wall. The processor considers these scenarios as possibly being the actual true distance from the wall and therefore reduces its speed after traveling 98 cm towards the wall. If the vehicle does not bump into the wall after traveling 98 cm towards the wall it eliminates the possibility of it having been 98 cm away from the wall and the likelihood of the vehicle being 99, 100, 101 and 102 cm away from the wall increases. This way as the autonomous vehicle travels within the environment, the processor adjusts its confidence of its location with respect to other autonomous devices and the environment based on observations and information gained of the surroundings.” Ebrahimi, col. 13, lines 33-56). Ebrahimi thus teaches autonomous driving based on the observed environmental structure, such as a distance to a wall.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1-3 and 5-16 are rejected under 35 U.S.C. 103 as being unpatentable over Ebrahimi Afrouzi et al. (US 11393114 B1), referred to hereinafter as Ebrahimi, in view of Gu et al. (US 20190241263 A1), referred to hereinafter as Gu, and further in view of Xu et al. (US 20200082614 A1), referred to hereinafter as Xu.

Regarding claim 1, Ebrahimi teaches an information processing apparatus (“readings recorded by a processor of each autonomous vehicle” Ebrahimi, abstract) comprising: and three-dimensional distance measurement information obtained from a sensor (“raw sensor data of a two-or-three-dimensional scene” Ebrahimi, col. 22, lines 38-39) and (“FIG. 2A illustrates three depth measurement devices taking depth readings within their respective fields of view, as provided in some embodiments.” Ebrahimi, col. 4, lines 11-13); acquire a prior map corresponding to the three-dimensional real-time observation result (“Processors of fixed sensing devices monitoring the environment and sensory devices that have previously operated within the same environment also share their readings” Ebrahimi, col. 5, lines 32-35), (“a two-or-three-dimensional scene” Ebrahimi, col. 22, lines 38-39), and (“In this example time is irrelevant and readings from the past, present and future are considered by the processor when attempting to find the best alignment between sets of readings.” Ebrahimi, col. 17, lines 50-53); align the three-dimensional observation result with the prior map (“To construct a map including locations which were not visited by the autonomous vehicle and observed by its mounted sensing devices, processors of autonomous vehicles operating within the same environment (or which have previously operated within the same environment) share their sensor readings with one another and processors of autonomous vehicles combine their own sensory readings with readings from remote sources to construct an extended map of the environment” Ebrahimi, col. 8, lines 52-60), (“the processor of an autonomous vehicles creates a second map or places an existing (local or remote) map on top of a previously created map in a layered fashion” Ebrahimi, col. 24, lines 42-45), (“As autonomous vehicles 200, 201 and 202 continue to move within the environment processor share new depth readings and combine them to construct a map of the environment.” Ebrahimi, col. 20, lines 4-7), and (“maps are three dimensional maps, e.g., indicating the position of walls, furniture, doors, and the like in an environment being mapped. In some embodiments, maps are two dimensional maps, e.g., point clouds or polygons or finite ordered list indicating obstructions at a given height (or range of height, for instance from zero to 5 or 10 centimeters or less) above the driving surface.” Ebrahimi, col. 23, lines 62-67, col. 24, line 1); expand the three-dimensional observation result based on the prior map, and determine a route based on the three-dimensional observation result having been expanded (“The processor of the autonomous vehicle constructs an extended map of the environment by combining readings collected locally and remotely by multiple sensing devices mounted on various autonomous vehicles positioned at different locations throughout the environment and/or fixed sensing devices monitoring the environment, allowing the autonomous vehicle to see beyond the surroundings it has discovered itself.” Ebrahimi, col. 8, lines 22-30) and (“the processor of the autonomous vehicle uses the constructed map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, to select a route with a route-finding algorithm from a current point to a target point, or the like.” Ebrahimi, col. 24, lines 8-14); wherein the prior map includes information related to recognized environmental structure including at least one of a topography, a wall, or a building (“Some of the embodiments described herein provide processes and systems for collaborative construction of a map, floor plan, spatial model, or other topographical representation of an environment using data collected by sensing devices, such as cameras, depth measurement devices, LIDARs, sonars, or other sensing devices, mounted on autonomous or semi-autonomous vehicles, such as automobiles and robotic devices, operating within the environment and/or fixed sensing devices monitoring the environment.” Ebrahimi, col. 4, lines 46-54) and (“maps are three dimensional maps, e.g., indicating the position of walls, furniture, doors, and the like in an environment being mapped. In some embodiments, maps are two dimensional maps, e.g., point clouds or polygons or finite ordered list indicating obstructions at a given height (or range of height, for instance from zero to 5 or 10 centimeters or less) above the driving surface.” Ebrahimi, col. 23, lines 62-67, col. 24, line 1); and wherein the circuitry is further configured to expand the three-dimensional real-time observation result corresponding to the at least one of the recognized environmental structure so that an expanded area of the three-dimensional real-time result is continuous within and outside a detection range of the sensor (“an autonomous vehicle, equipped with a depth measurement device, camera, LIDAR and sonar moves within an environment, the depth measurement device continuously taking depth readings from the depth measurement device to objects within the environment, the camera continuously taking visual readings of the environment and the sonar continuously monitoring the surrounding obstacles” Ebrahimi, col. 8, lines 39-46).
However, Ebrahimi does not teach circuitry configured to generate a three-dimensional real-time observation result based on self-position estimation information. Gu teaches three-dimensional real-time observation (“The UAV control unit 110 may acquire date information indicating the current date and time from a GPS receiver 240.” Gu, para. [0077]). Gu teaches comprising: circuitry configured to generate a three-dimensional real-time observation (“On the basis of images imaged by the plurality of photographing devices 230, three-dimensional spatial data around the unmanned aerial vehicle 100 may be generated.” Gu, para. [0072]) and (“The UAV control unit 110 may acquire date information indicating the current date and time from a GPS receiver 240.” Gu, para. [0077]) result based on self-position estimation information (“The UAV control unit 110 acquires position information indicating the position of the unmanned aerial vehicle 100. The UAV control unit 110 may acquire, from the GPS receiver 240, position information indicating the latitude, longitude and altitude where the unmanned aerial vehicle 100 is located.” Gu, para. [0078]).

Ebrahimi teaches determining a route (“the processor of the autonomous vehicle uses the constructed map to autonomously navigate the environment during operation, e.g., accessing the map to determine that a candidate route is blocked by an obstacle denoted in the map, to select a route with a route-finding algorithm from a current point to a target point, or the like.” Ebrahimi, col. 24, lines 8-14) and mentions drones (“Examples of vehicles include automobiles, robotic devices, all-terrain vehicles, planetary vehicles, carts, hovercraft, drone, etc.” Ebrahimi, col. 5, lines 1-2). However, Ebrahimi does not specifically teach determining a flight route. Gu teaches determining a flight route (“The flight path processing unit 111 may control the flight of the unmanned aerial vehicle 100 according to the generated flight path. The flight path processing unit 111 may make the photographing device 220 or the photographing device 230 to photograph an image of a subject at a photographing position existing in the middle of the flight path. The unmanned aerial vehicle 100 may circle the side of the subject and follow the flight path. Therefore, the photographing device 220 or the photographing device 230 may photograph the side surface of the subject at the photographing position in the flight path.” Gu, para. [0100]).

Ebrahimi and Gu are combinable because they are from the same field of endeavor, image processing in constructing a map/path. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ebrahimi in light of Gu's self-position estimation. One would have been motivated to do so because it can improve the restoration accuracy of the three-dimensional shape. (Gu, para. [0102])

However, the combination of Ebrahimi and Gu does not teach an observation result corresponding to semantic segmentation such that an expanded area of the three-dimensional real-time result is continuous for same semantics. Xu teaches an observation result corresponding to semantic segmentation such that an expanded area of the three-dimensional real-time result is continuous for same semantics (“At step 210, semantic segmentation is performed to identify predefined semantic types in the physical environment.” Xu, para. [0061]), (“The predefined semantic types can include objects of interest and shapes of interest, such as traffic signs, lane markings, and road boundaries. In some embodiments, the system can identify a portion of the point cloud to be associated with a predefined semantic type based on physical characteristics (e.g., color, shape, pattern, dimension, irregularity or uniqueness) of the points.” Xu, para. [0061]), and (“Further, the system can identify a portion of the point cloud to be associated with a predefined semantic type based on metadata (e.g., location) of the points.” Xu, para. [0061]).

Ebrahimi, Gu, and Xu are combinable because they are from the same field of endeavor, image processing and three-dimensional spatial data. It would have been obvious to a person of ordinary skill in the art before the effective filing date of the claimed invention to modify Ebrahimi and Gu in light of Xu's semantic segmentation. One would have been motivated to do so because aligning the three-dimensional map with another dataset allows each dynamic object's relationship with the stationary objects in the physical environment (e.g., lane markings) to be identified with a high level of fidelity, and dynamic scenarios including behaviors of dynamic objects (e.g., collisions) can be referenced in the 3D map dataset. (Xu, para. [0054])

Regarding claim 2, Ebrahimi teaches wherein the circuitry is configured to expand the three-dimensional observation result in an unobservable area (“the map includes locations observed by its mounted sensing devices and hence visited by the autonomous vehicle. To construct a map including locations which were not visited by the autonomous vehicle and observed by its mounted sensing devices, processors of autonomous vehicles operating within the same environment (or which have previously operated within the same environment) share their sensor readings with one another and processors of autonomous vehicles combine their own sensory readings with readings from remote sources to construct an extended map of the environment discovering areas beyond their respective fields of view of their sensing devices” col. 8, lines 50-62). Gu teaches three-dimensional real-time observation (“The UAV control unit 110 may acquire date information indicating the current date and time from a GPS receiver 240.” Gu, para. [0077]).

Regarding claim 3, Ebrahimi teaches wherein the circuitry is further configured to perform plane detection on the three-dimensional observation result (“Once an area of overlap is identified (e.g., as a bounding box of pixel positions or threshold angle of a vertical plane at which overlap starts in each field of view).” Ebrahimi, col. 7, lines 33-36) and expand, with a result of the plane detection, a plane based on information regarding the prior map (“a matrix containing pixel position, color, brightness, and intensity or a finite ordered list containing x, y position and norm of vectors measured from the camera to objects in a two-dimensional plane or a list containing time-of-flight of light signals emitted in a two-dimensional plane between camera and objects in the environment.” Ebrahimi, col. 23, lines 36-42). Gu teaches three-dimensional real-time observation result (“The UAV control unit 110 may acquire date information indicating the current date and time from a GPS receiver 240.” Gu, para. [0077]).
Regarding claim 5, Gu teaches wherein the three-dimensional real-time observation result corresponds to a three-dimensional occupancy grid map (“The UAV control unit 110 may refer to a three-dimensional map database to specify the position where the unmanned aerial vehicle 100 can be located in order to photograph the photography range to be photographed, and acquire such a position as position information indicating the position where the unmanned aerial vehicle 100 is to be located.” Gu, para. [0080]).

Regarding claim 6, Ebrahimi teaches wherein the circuitry is configured to acquire the prior map from a different information processing apparatus through communication (“a processor of an autonomous vehicle shares data from a previously constructed map of the environment. If applicable, as in the case of depth readings, for example, the processor of an autonomous vehicle adjusts data received from another processor of an autonomous vehicles based on its location with respect to the location of the autonomous vehicle sending the data.” Ebrahimi, col. 5, lines 35-41).

Regarding claim 7, Ebrahimi teaches wherein the prior map corresponds to a map based on the three-dimensional observation result generated by the different information processing apparatus (“an image, a map or a collection of data points. In some embodiments, combined readings are readings collected by the same sensing device or from other sensing devices operating within the same environment and/or fixed sensing devices monitoring the environment. In some embodiments, combined readings are captured at the same time or at different times.” Ebrahimi, col. 8, lines 31-38). Gu teaches three-dimensional real-time observation result (“The UAV control unit 110 may acquire date information indicating the current date and time from a GPS receiver 240.” Gu, para. [0077]).

Regarding claim 8, Gu teaches wherein the prior map corresponds to a map obtained by processing of cutting the three-dimensional real-time observation result at a certain height (“In addition, by setting the vertical photographing intervals at equal intervals by the flight path processing unit 111, the photographed images photographed at each photographing positions between different flight courses are equally divided in the height direction of the subject BL.” Gu, para. [0174]) and converting the cut three-dimensional real-time observation result into a bird's-eye view (“FIG. 7A is a plan view of the periphery of a subject viewed from the sky.” Gu, para. [0051]).

Regarding claim 9, Ebrahimi teaches wherein the prior map corresponds to a map obtained by processing of reducing resolution of the three-dimensional real-time observation result to an extent enabling the communication (“some embodiments down-res images to afford faster matching, e.g., by selecting every other, every fifth, or more or fewer vectors, or by averaging adjacent readings to form two lower-resolution versions of the images to be aligned.” Ebrahimi, col. 7, lines 23-27).

Regarding claim 10, refer to the explanation of claim 1. Regarding claim 11, refer to the explanation of claim 1.

Regarding claim 12, Xu teaches wherein the circuitry is configured to acquire the prior map from a different flight object from the flight object through communication (“the one or more electronic devices include the 3D sensing device 102, the 2D sensing device 122, and additional electronic devices that are communicatively coupled with each other.” Xu, para. [0045] and fig. 1).

Regarding claim 13, Gu teaches three-dimensional real-time observation result (“The UAV control unit 110 may acquire date information indicating the current date and time from a GPS receiver 240.” Gu, para. [0077]). Xu teaches wherein the prior map corresponds to a map based on the three-dimensional real-time observation result generated by the different flight object (“With reference to FIG. 1, one or more 3D sensing devices 102 are used to capture information of a physical environment (e.g., a parking lot, a road segment) to obtain 3D data and 2D data 104.” Xu, para. [0046]).

Regarding claim 14, refer to the explanation of claim 8. Regarding claim 15, refer to the explanation of claim 9. Regarding claim 16, refer to the explanation of claim 3.

Allowable Subject Matter

Claims 4, 17, and 18 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Conclusion

Applicant's amendment necessitated the new ground(s) of rejection presented in this Office action. Accordingly, THIS ACTION IS MADE FINAL. See MPEP § 706.07(a). Applicant is reminded of the extension of time policy as set forth in 37 CFR 1.136(a).

A shortened statutory period for reply to this final action is set to expire THREE MONTHS from the mailing date of this action. In the event a first reply is filed within TWO MONTHS of the mailing date of this final action and the advisory action is not mailed until after the end of the THREE-MONTH shortened statutory period, then the shortened statutory period will expire on the date the advisory action is mailed, and any nonprovisional extension fee (37 CFR 1.17(a)) pursuant to 37 CFR 1.136(a) will be calculated from the mailing date of the advisory action. In no event, however, will the statutory period for reply expire later than SIX MONTHS from the mailing date of this final action.

Any inquiry concerning this communication or earlier communications from the examiner should be directed to PARDIS SOHRABY, whose telephone number is (571) 270-0809. The examiner can normally be reached Monday - Friday, 9 am to 6 pm. Examiner interviews are available via telephone, in person, and video conferencing using a USPTO-supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice. If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Jennifer Mehmood, can be reached at (571) 272-2976. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/PARDIS SOHRABY/
Examiner, Art Unit 2664

/JENNIFER MEHMOOD/
Supervisory Patent Examiner, Art Unit 2664
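For context on the disputed "expand the three-dimensional observation result based on the prior map ... continuous within and outside a detection range of the sensor" limitation, the following is a minimal illustrative sketch in Python of the general idea of filling an occupancy observation beyond sensor range from a prior map. It is not drawn from the application, from Ebrahimi, or from any cited reference; the 2D grid, the sensor-range mask, and the fill rule are assumptions chosen only to make the concept concrete.

import numpy as np

# Toy 2D occupancy-grid "expansion" using a prior map (illustrative only).
def expand_with_prior(observed: np.ndarray,
                      prior: np.ndarray,
                      known: np.ndarray) -> np.ndarray:
    """Keep observed cells where the sensor has coverage (known == True);
    outside that range, fill from the prior map so structure such as a wall
    stays continuous across the detection boundary."""
    return np.where(known, observed, prior)

# A wall (row of occupied cells) only partly inside the sensor's range.
prior = np.zeros((5, 8), dtype=int)
prior[2, :] = 1                      # prior map: wall spans the whole row

observed = np.zeros_like(prior)
known = np.zeros_like(prior, dtype=bool)
known[:, :4] = True                  # sensor only covers the left half
observed[2, :4] = 1                  # wall observed inside that range

expanded = expand_with_prior(observed, prior, known)
print(expanded)                      # wall is continuous inside and outside range

In this toy picture the "expansion" is just a prior-map fill outside the sensed region; the amended claim 1 additionally ties the expansion to recognized environmental structure (topography, wall, building) and to semantic segmentation, which is where the dispute with Ebrahimi, Gu, and Xu centers.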

Prosecution Timeline

Nov 03, 2023
Application Filed
Nov 18, 2024
Non-Final Rejection — §103
Jan 27, 2025
Response Filed
May 21, 2025
Final Rejection — §103
Jul 02, 2025
Response after Non-Final Action
Aug 25, 2025
Request for Continued Examination
Aug 27, 2025
Response after Non-Final Action
Sep 17, 2025
Non-Final Rejection — §103
Dec 09, 2025
Response Filed
Mar 12, 2026
Final Rejection — §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12592015
PREDICTING SCATTERED SIGNAL OF X-RAY, AND CORRECTING SCATTERED BEAM
2y 5m to grant • Granted Mar 31, 2026
Patent 12573236
FACIAL EXPRESSION-BASED DETECTION METHOD FOR DEEPFAKE BY GENERATIVE ARTIFICIAL INTELLIGENCE (AI)
2y 5m to grant • Granted Mar 10, 2026
Patent 12567240
OPEN VOCABULARY INSTANCE SEGMENTATION WITH NOISE ESTIMATION AND ROBUST STUDENT
2y 5m to grant • Granted Mar 03, 2026
Patent 12555378
IMAGE ANALYSIS SYSTEM, IMAGE ANALYSIS METHOD, AND PROGRAM
2y 5m to grant • Granted Feb 17, 2026
Patent 12536666
Computer Software Module Arrangement, a Circuitry Arrangement, an Arrangement and a Method for Improved Image Processing
2y 5m to grant • Granted Jan 27, 2026
Study what changed to get past this examiner. Based on 5 most recent grants.

AI Strategy Recommendation

Get an AI-powered prosecution strategy using examiner precedents, rejection analysis, and claim mapping.

Prosecution Projections

5-6
Expected OA Rounds
79%
Grant Probability
89%
With Interview (+9.7%)
3y 0m
Median Time to Grant
High
PTA Risk
Based on 92 resolved cases by this examiner. Grant probability derived from career allow rate.
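As a sanity check, the headline figures above can be reproduced from the raw counts in this report. The short Python sketch below assumes the interview lift (+9.7%) is applied as a straight percentage-point addition to the career allow rate; the tool's actual model may weight cases differently, so treat this as illustrative arithmetic, not the product's method.

# Illustrative reconstruction of the projection figures shown above.
granted, resolved = 73, 92           # examiner's career counts (from this report)
interview_lift_pts = 9.7             # reported lift for resolved cases with interview

allow_rate = granted / resolved                          # 0.793... -> "79%"
with_interview = allow_rate + interview_lift_pts / 100   # 0.890... -> "89%"

print(f"Career allow rate:        {allow_rate:.0%}")
print(f"Grant prob. w/ interview: {with_interview:.0%}")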
