Prosecution Insights
Last updated: April 19, 2026
Application No. 18/690,744

INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD

Non-Final OA (§102, §103)
Filed
Mar 11, 2024
Examiner
MAZUMDER, TAPAS
Art Unit
2615
Tech Center
2600 — Communications
Assignee
Sony Semiconductor Solutions Corporation
OA Round
1 (Non-Final)
82%
Grant Probability
Favorable
1-2
OA Rounds
2y 4m
To Grant
98%
With Interview

Examiner Intelligence

Grants 82% — above average
82%
Career Allow Rate
342 granted / 418 resolved
+19.8% vs TC avg
Strong interview lift
+16.2%
Interview Lift
resolved cases with vs. without interview
Typical timeline
2y 4m
Avg Prosecution
16 currently pending
Career history
434
Total Applications
across all art units
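The headline numbers above are simple ratios over the examiner's resolved docket. As a hedged sketch (not the tool's actual code; all names are illustrative), the career allow rate and the implied Tech Center baseline can be reconstructed from the counts shown:

```python
# Sketch: deriving the examiner metrics shown above from raw counts.
# Counts (342 granted / 418 resolved, +19.8% vs TC avg) come from this page.
def allow_rate(granted: int, resolved: int) -> float:
    """Career allow rate as a percentage of resolved cases."""
    return 100.0 * granted / resolved

career = allow_rate(342, 418)   # 81.8..., displayed rounded to 82%
tc_delta = 19.8                 # stated gap vs the Tech Center average
tc_avg = career - tc_delta      # implied TC average, about 62%
print(round(career))            # -> 82
```

The displayed 82% is the rounded ratio; the ~62% Tech Center average is inferred from the stated +19.8-point gap.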

Statute-Specific Performance

§101
8.8%
-31.2% vs TC avg
§103
50.3%
+10.3% vs TC avg
§102
12.4%
-27.6% vs TC avg
§112
16.0%
-24.0% vs TC avg
Tech Center average estimate shown for comparison • Based on career data from 418 resolved cases
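A useful check on the panel above: the four "vs TC avg" deltas are all consistent with a single implied Tech Center baseline of 40.0% per statute (examiner rate minus delta). The rates are copied from the panel; the uniform 40% baseline is an inference, not a stated figure:

```python
# Sketch: reconstructing the "vs TC avg" deltas from the per-statute
# allowance-after-rejection rates and an assumed 40.0% TC baseline.
examiner_rate = {"101": 8.8, "103": 50.3, "102": 12.4, "112": 16.0}
tc_baseline = 40.0  # inferred: every displayed delta equals rate - 40.0
deltas = {s: round(r - tc_baseline, 1) for s, r in examiner_rate.items()}
print(deltas)  # {'101': -31.2, '103': 10.3, '102': -27.6, '112': -24.0}
```

The takeaway matches the panel: this examiner resolves §103 disputes favorably far more often than §101, §102, or §112 disputes.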

Office Action

§102 §103
DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA. In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis (i.e., changing from AIA to pre-AIA) for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

Claim Interpretation

The following is a quotation of 35 U.S.C. 112(f):

(f) Element in Claim for a Combination. – An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

The following is a quotation of pre-AIA 35 U.S.C. 112, sixth paragraph:

An element in a claim for a combination may be expressed as a means or step for performing a specified function without the recital of structure, material, or acts in support thereof, and such claim shall be construed to cover the corresponding structure, material, or acts described in the specification and equivalents thereof.

This application includes one or more claim limitations that do not use the word “means,” but are nonetheless being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, because the claim limitation(s) uses a generic placeholder that is coupled with functional language without reciting sufficient structure to perform the recited function and the generic placeholder is not preceded by a structural modifier.

Such claim limitation(s) is/are: Claims 1-19 are interpreted under 35 U.S.C. 112(f) because they recite generic placeholders, “display processing unit” and “map generation unit,” that are coupled with functional language without reciting sufficient structure to perform the recited function, and the generic placeholders are not preceded by a structural modifier. The specification provides the hardware support for the generic placeholders as a CPU or processor in “[0090] As illustrated, the server apparatus 1 has functions as a map generation unit F1, a display processing unit F2, and an AR service processing unit F3.” Therefore, the generic placeholders are construed to cover that corresponding structure.

Because this/these claim limitation(s) is/are being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, it/they is/are being interpreted to cover the corresponding structure described in the specification as performing the claimed function, and equivalents thereof. If applicant does not intend to have this/these limitation(s) interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph, applicant may: (1) amend the claim limitation(s) to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph (e.g., by reciting sufficient structure to perform the claimed function); or (2) present a sufficient showing that the claim limitation(s) recite(s) sufficient structure to perform the claimed function so as to avoid it/them being interpreted under 35 U.S.C. 112(f) or pre-AIA 35 U.S.C. 112, sixth paragraph.

Claim Rejections - 35 USC § 102

The following is a quotation of the appropriate paragraphs of 35 U.S.C. 102 that form the basis for the rejections under this section made in this Office action:

A person shall be entitled to a patent unless – (a)(1) the claimed invention was patented, described in a printed publication, or in public use, on sale, or otherwise available to the public before the effective filing date of the claimed invention.
Claim(s) 1-4, 7-10 and 20 are rejected under 35 U.S.C. 102(a)(1) as being anticipated by Yamanaka et al. (WO 2018216341, “Yamanaka”).

Regarding claim 1, Yamanaka teaches an information processing apparatus (Fig. 13) comprising: a display processing unit (CPU 901) that performs display processing of map data indicating a three-dimensional structure of a target space, the map data being generated on a basis of sensing information by at least one of a visible light camera or a distance measuring sensor (Section 1, Schematic Configuration, 15th paragraph: “The information processing apparatus 100 acquires depth information from the depth sensor 210, estimates the distance between the predetermined viewpoint and the real object based on the acquired depth information, and based on the estimation result, the three-dimensional information of the real object. Generate a model that reproduces the shape.” “By using a plurality of images captured from different viewpoints in this way, for example, based on the parallax between the plurality of images, a predetermined viewpoint (for example, the position of the depth sensor 210) and a subject (that is, in the image) It is possible to estimate (calculate) the distance between the object and the real object picked up. Therefore, for example, it is possible to generate a so-called depth map in which the estimation result of the distance between the predetermined viewpoint and the subject is mapped on the imaging plane.”), wherein the display processing unit (CPU 901) performs display processing of the map data on a basis of sensing information by a third sensor that is a sensor excluding the visible light camera and the distance measuring sensor (Section 1, Schematic Configuration, 15th paragraph: “the information processing apparatus 100 acquires polarization information from the polarization sensor 230, and corrects the generated model based on the acquired polarization information.”).

Claim 20 is directed to a method whose steps are similar in scope and function to the elements of device claim 1; therefore, claim 20 is rejected on the same rationale as specified in the rejection of claim 1.

Regarding claim 2, Yamanaka teaches wherein the display processing unit performs processing of causing a display unit to display a map including the sensing information by the third sensor as a map indicating the three-dimensional structure of the target space (Section 4, Hardware Configuration, 6th paragraph: “For example, the output device 917 outputs results obtained by various processes performed by the information processing apparatus 900. Specifically, the display device displays results obtained by various processes performed by the information processing device 900 as text or images”).

Regarding claim 3, Yamanaka teaches wherein the display processing unit performs processing of causing a display unit to display a map including information estimated from the sensing information by the third sensor as a map indicating the three-dimensional structure of the target space (Section 4, Hardware Configuration, 6th paragraph, quoted above for claim 2).

Regarding claim 4, Yamanaka teaches wherein the third sensor includes a polarization camera (Section 1, Schematic Configuration, 15th paragraph: “the information processing apparatus 100 acquires polarization information from the polarization sensor 230, and corrects the generated model based on the acquired polarization information.”), and the display processing unit performs processing of causing the display unit to display a map including surface division information of a subject estimated from a captured image of the polarization camera as the map indicating the three-dimensional structure of the target space (Section 2, Study on reproduction of 3D shape model, 3rd paragraph: “As a method for integrating a depth map into a three-dimensional space model, for example, a method of expressing a three-dimensional space as an isosurface of a distance field, as represented by so-called Kinect fusion, can be mentioned. In the method of expressing a three-dimensional space by a distance field, the space is divided into voxels and polyhedra, and each has a distance to the three-dimensional model.”).

Regarding claim 7, Yamanaka teaches a map generation unit that generates the map data on a basis of the sensing information by at least one of the visible light camera or the distance measuring sensor and the sensing information by the third sensor (Section 1, Schematic Configuration, 15th paragraph: “The information processing apparatus 100 acquires depth information from the depth sensor 210, estimates the distance between the predetermined viewpoint and the real object based on the acquired depth information, and based on the estimation result, the three-dimensional information of the real object. Generate a model that reproduces the shape… the information processing apparatus 100 acquires polarization information from the polarization sensor 230, and corrects the generated model based on the acquired polarization information.”).

Regarding claim 8, Yamanaka teaches wherein the third sensor includes a polarization camera, and the map generation unit generates the map data on a basis of polarization information of subject light obtained by the polarization camera (Section 1, Schematic Configuration, 15th paragraph: “the information processing apparatus 100 acquires polarization information from the polarization sensor 230, and corrects the generated model based on the acquired polarization information.”).

Regarding claim 9, Yamanaka teaches wherein the map generation unit generates the map data on a basis of normal direction information of a subject estimated from the polarization information (Section 3.1, Functional Configuration, 11th paragraph: “The normal estimation unit 109 acquires the polarization information from the polarization sensor 230, and based on the acquired polarization information, the normal vector on the outer surface of the real object located in the space from which the polarization information is acquired (that is, the real space). Generate a normal map with the distribution mapped.”).

Regarding claim 10, Yamanaka teaches wherein the map generation unit receives an input of distance image data obtained by the distance measuring sensor as generation source data of the map data (Section 2, Study on reproduction of 3D shape model, 6th paragraph: “The reproducibility of the polygon mesh depends on the fineness of the space division, and in order to obtain a polygon mesh with a higher reproducibility, the data amount tends to be larger. In addition, when depth maps from multiple viewpoints are integrated, the shape reproduced as a polygon mesh and the target due to errors and noise that occur in distance measurement, errors related to estimation of the orientation of the imaging unit, etc.”), and performs reduction processing of multipath induced noise on the distance image data on a basis of surface division information of the subject estimated from the normal direction information in generation processing of the map data (Section 2, last paragraph: “In view of the above-described situation, the information processing system 1 according to the present embodiment estimates the geometric characteristics of the object based on the polarization information acquired by the polarization sensor 230, thereby generating the three-dimensional generated based on the estimation result. A shape model (for example, a polygon mesh) is corrected. For example, in FIG. 2, the right diagram shown as corrected shows an example of the corrected polygon mesh when the polygon mesh shown in the left diagram is corrected based on the technique according to the present disclosure. With the configuration as described above, the information processing system 1 according to the present embodiment improves the reproducibility of a three-dimensional model (particularly, the reproducibility of edges and corners) and reproduces the three-dimensional shape”.).

Claim Rejections - 35 USC § 103

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claim(s) 5 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Yamanaka in view of Ashbrook et al. (US patent publication 2019/0265038, “Ashbrook”).

Regarding claim 5, Yamanaka does not expressly teach wherein the third sensor includes a multi spectrum camera, and the display processing unit performs processing of causing the display unit to display a map including information indicating an existence region of a specific subject estimated from a captured image of the multi spectrum camera as the map indicating the three-dimensional structure of the target space. However, Ashbrook teaches a multi spectrum camera, and the display processing unit performing processing of causing the display unit to display a map including information indicating an existence region of a specific subject estimated from a captured image of the multi spectrum camera as the map indicating the three-dimensional structure of the target space (“[0041] Preferably, the mapping of the obtained change events comprises determining estimates of motion of the event camera.”). Ashbrook and Yamanaka are analogous as they are from the field of map generation.
Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yamanaka to include the third sensor as a multi spectrum camera, with the display processing unit performing processing of causing the display unit to display a map including information indicating an existence region of a specific subject estimated from a captured image of the multi spectrum camera as the map indicating the three-dimensional structure of the target space, as taught by Ashbrook. The motivation would have been to include an alternative camera output to correct the initially captured map.

Regarding claim 19, Yamanaka does not expressly teach wherein the third sensor includes an event-based vision sensor, and the map generation unit generates the map data on a basis of motion information of a subject obtained on a basis of sensing information of the event-based vision sensor. However, Ashbrook teaches an event-based vision sensor, and the map generation unit generating the map data on a basis of motion information of a subject obtained on a basis of sensing information of the event-based vision sensor (“[0041] Preferably, the mapping of the obtained change events comprises determining estimates of motion of the event camera.”). Ashbrook and Yamanaka are analogous as they are from the field of map generation. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yamanaka to include the third sensor as an event-based vision sensor, with the map generation unit generating the map data on a basis of motion information of a subject obtained on a basis of sensing information of the event-based vision sensor, as taught by Ashbrook. The motivation would have been to include an alternative camera to correct the initially captured map and make it dynamic.

Claim 6 is rejected under 35 U.S.C. 103 as being unpatentable over Yamanaka in view of Aponte et al. (US patent publication 2019/0318598, “Aponte”).

Regarding claim 6, Yamanaka does not expressly teach wherein the third sensor includes a thermal camera, and the display processing unit performs processing of causing the display unit to display a map including information indicating an existence region of a specific subject estimated from a captured image of the thermal camera as the map indicating the three-dimensional structure of the target space. However, Aponte teaches a thermal camera, and the display processing unit performing processing of causing the display unit to display a map including information indicating an existence region of a specific subject estimated from a captured image of the thermal camera as the map indicating the three-dimensional structure of the target space (Paragraph [0012]). Aponte and Yamanaka are analogous as they are from the field of map generation. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yamanaka to include a third sensor as a thermal camera, with the display processing unit performing processing of causing the display unit to display a map including information indicating an existence region of a specific subject estimated from a captured image of the thermal camera as the map indicating the three-dimensional structure of the target space, as taught by Aponte. The motivation would have been to include an alternative camera output to correct the initially captured map.

Claim(s) 16-18 are rejected under 35 U.S.C. 103 as being unpatentable over Yamanaka in view of Del Grande et al. (US patent publication 2004/0183020, “Del Grande”).

Regarding claim 16, Yamanaka does not expressly teach wherein the third sensor includes a thermal camera, and the map generation unit generates the map data on a basis of temperature information of a subject obtained by the thermal camera. However, Del Grande teaches a thermal camera, and the map generation unit generating the map data on a basis of temperature information of a subject obtained by the thermal camera (Paragraph [0029]). Del Grande and Yamanaka are analogous as they are from the field of map generation. Therefore, it would have been obvious for a person of ordinary skill in the art before the effective filing date of the claimed invention to have modified Yamanaka to include the third sensor as a thermal camera, with the map generation unit generating the map data on a basis of temperature information of a subject obtained by the thermal camera, as taught by Del Grande. The motivation would have been to include an alternative camera output to correct the initially captured map.

Regarding claim 17, Yamanaka as modified by Del Grande teaches wherein the map generation unit generates the map data on a basis of division information of an object region estimated on a basis of the temperature information (Del Grande: “[0027] The present invention provides a method for detecting an underground object surrounded by a host material, where the detection is accomplished by using thermal inertia diagnostics, which removes both surface and subsurface foreign-object clutter. The host material is analyzed using visible, temperature and thermal inertia imagery to characterize the contrasting features of the host material from those of the object.”). The motivation would have been the same as stated for claim 16.

Regarding claim 18, Yamanaka as modified by Del Grande teaches wherein the map generation unit performs processing of removing a person portion estimated on a basis of the temperature information in generation processing of the map data (Del Grande: “[0030] Thermal image clutter may be identified and removed by mapping the maximum minus the minimum temperature spread from coregistered day minus night, or autumn minus spring, temperature maps. Thermal image clutter is produced by foreign objects, and materials, such as: disturbed terrain, animal holes, roots, water, mud and rocks which resist diurnal and seasonal temperature changes differently than the sought-after object and host material.” Del Grande removes clutter based on temperature information. It would have been obvious for a person of ordinary skill to use the same technique to remove a person portion, treating it as clutter, based on the teaching of Del Grande.). The motivation would have been the same as stated for claim 16.

Allowable Subject Matter

Claims 11-15 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

Claim 11 is objected to because the best combination of the prior art fails to expressly teach wherein the map generation unit receives an input of visible light image data obtained by the visible light camera as generation source data of the map data, and generates the map data on a basis of information of a transparent object region estimated on a basis of the polarization information.

Claim 12 is objected to because the best combination of the prior art fails to expressly teach wherein the third sensor includes a multi spectrum camera, and the map generation unit generates the map data on a basis of wavelength analysis information of subject light obtained by the multi spectrum camera.

Dependent claims 13-15 are objected to by virtue of dependency.

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to Tapas Mazumder whose telephone number is (571) 270-7466.
The examiner can normally be reached M-F 8:00 AM-5:00 PM PST. Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Alicia Harrington, can be reached at 571-272-2330. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of published or unpublished applications may be obtained from Patent Center. Unpublished application information in Patent Center is available to registered users. To file and manage patent submissions in Patent Center, visit: https://patentcenter.uspto.gov. Visit https://www.uspto.gov/patents/apply/patent-center for more information about Patent Center and https://www.uspto.gov/patents/docx for information about filing in DOCX format. For additional questions, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/TAPAS MAZUMDER/
Primary Examiner, Art Unit 2615
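The §102 rejection of claim 1 leans on Yamanaka's parallax-based depth estimation ("based on the parallax between the plurality of images... generate a so-called depth map"). For orientation, a minimal sketch of that standard stereo technique, depth Z = f · B / d for a rectified camera pair, where f is focal length in pixels, B the baseline in meters, and d the per-pixel disparity in pixels. The numbers below are hypothetical, not taken from either reference:

```python
# Sketch of depth-from-parallax (the technique Yamanaka is cited for).
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Per-pixel depth Z = f * B / d (meters); None where matching failed."""
    return [[focal_px * baseline_m / d if d > 0 else None for d in row]
            for row in disparity_px]

# f = 800 px, B = 0.1 m, so f*B = 80: larger disparity means closer subject.
depth = depth_from_disparity([[8.0, 16.0], [32.0, 0.0]], 800.0, 0.1)
# -> [[10.0, 5.0], [2.5, None]]
```

Yamanaka's contribution, per the Office Action, is not this step itself but correcting the resulting 3D model with polarization-derived surface normals.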

Prosecution Timeline

Mar 11, 2024
Application Filed
Sep 30, 2025
Non-Final Rejection — §102, §103 (current)

Precedent Cases

Applications granted by this same examiner with similar technology

Patent 12579763
SIGNALING POSE INFORMATION TO A SPLIT RENDERING SERVER FOR AUGMENTED REALITY COMMUNICATION SESSIONS
2y 5m to grant Granted Mar 17, 2026
Patent 12571648
GUIDANCE FOR COLLABORATIVE MAP BUILDING AND UPDATING
2y 5m to grant Granted Mar 10, 2026
Patent 12573157
SEE-THROUGH DISPLAY METHOD AND SEE-THROUGH DISPLAY SYSTEM
2y 5m to grant Granted Mar 10, 2026
Patent 12561916
INFORMATION PROCESSING APPARATUS
2y 5m to grant Granted Feb 24, 2026
Patent 12555328
VIDEO PLAYING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
2y 5m to grant Granted Feb 17, 2026
Study what changed in these applications to get past this examiner. Based on the 5 most recent grants.


Prosecution Projections

1-2
Expected OA Rounds
82%
Grant Probability
98%
With Interview (+16.2%)
2y 4m
Median Time to Grant
Low
PTA Risk
Based on 418 resolved cases by this examiner. Grant probability derived from career allow rate.
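The interview-adjusted figure above follows from simple addition of the base grant probability and the interview lift, capped at 100%. The additive model is an assumption about how the tool combines the two figures; the inputs are the panel's own numbers:

```python
# Sketch of the projection arithmetic: 82% base + 16.2-point interview
# lift = 98.2%, displayed as 98%. Additive combination is an assumption.
def grant_probability(base_pct: float, interview_lift_pct: float = 0.0) -> float:
    """Projected grant probability, capped at 100%."""
    return min(base_pct + interview_lift_pct, 100.0)

base = grant_probability(82.0)                   # no interview
with_interview = grant_probability(82.0, 16.2)   # 98.2, shown as 98%
```

The cap matters only for examiners whose base rate plus lift would exceed 100%, which is not the case here.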
